From: Quadibloc on
On my web site, at

http://www.quadibloc.com/arch/perint.htm

I have started a page exploring an imaginary 'perfect' computer
architecture.

Struggling with many opcode formats with which I was not completely
satisfied in my imaginary architecture that built opcodes up from 16-
bit elements, I note that an 18-bit basic element for an instruction
solves the problems previously seen, by opening up large vistas of
additional opcode space.

Even more to the point, if one fetches four 36-bit words from memory
in a single operation, not only do aligned 36-bit and 72-bit floats
fit nicely in this, but so do 48-bit floating-point numbers. These
provide 10 digits of precision, and are therefore very useful, since
they allow a shorter, faster floating-point type to be used for many
situations where single precision does not fit, but double-precision
is overkill.
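The 10-digit figure checks out under a plausible field split. The post does not say how the 48 bits divide between sign, exponent and significand, so the splits below are hypothetical; the rule of thumb is digits ≈ significand bits × log10(2):

```python
import math

def decimal_digits(significand_bits: int) -> int:
    """Decimal digits of precision carried by a binary significand."""
    return math.floor(significand_bits * math.log10(2))

# Hypothetical field splits (sign + exponent + significand) -- the post
# does not specify a layout, these are just illustrative choices.
formats = {
    "36-bit single (1+8+27)": 27,
    "48-bit intermediate (1+11+36)": 36,
    "72-bit double (1+11+60)": 60,
}
for name, m in formats.items():
    print(f"{name}: ~{decimal_digits(m)} digits")
```

With a 36-bit significand the 48-bit type does indeed come out at 10 decimal digits.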

Making single precision 36 bits instead of 32 is likely to be useful
as well, if the complaints of many programmers at the time of the 7090
to 360 changeover were justified.

Traditional 36-bit computers used 36-bit instruction words, fetched 36
bits from memory at a time, and therefore did not offer a 48-bit
floating point type.

Note that fetching 144 bits from memory at a time means one can have
not just 6-bit and 9-bit characters, but even 8-bit characters too, as
144 is a multiple of 16. So this kind of implementation is no longer
at war with the 8-bit world.
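A quick arithmetic check of the data widths a 144-bit fetch accommodates without waste:

```python
LINE_BITS = 4 * 36  # four 36-bit words fetched per memory access

# 144 divides evenly by every unit size mentioned in the post.
for width in (6, 8, 9, 16, 36, 48, 72):
    units, rem = divmod(LINE_BITS, width)
    print(f"{width:2}-bit units: {units:2} per 144-bit fetch, remainder {rem}")
```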

I think such an architecture is too great a departure from current
norms to be considered, which is a disappointment, as it seems to
have many merits: it is neither too big nor too small, but "just
right".

John Savard

From: Nick Maclaren on

In article <1173100839.970159.172460(a)j27g2000cwj.googlegroups.com>,
"Quadibloc" <jsavard(a)ecn.ab.ca> writes:
|>
|> Making single precision 36 bits instead of 32 is likely to be useful
|> as well, if the complaints of many programmers at the time of the 7090
|> to 360 changeover were justified.

The problems with System/360 arithmetic were almost entirely due to
its truncation, and very little to do with its base or even its very
small single precision.
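The truncation point is easy to demonstrate: chopping a result to fixed precision pushes every error in the same direction, while rounding leaves the errors centred on zero. A small sketch, simulating a 24-bit fractional significand in plain Python (nothing here is System/360-specific):

```python
import random

random.seed(1)
BITS = 24
SCALE = 1 << BITS

errs_chop, errs_rnd = [], []
for _ in range(10_000):
    x = random.random()
    # chop: discard low-order bits; round: take the nearest representable value
    errs_chop.append(x - int(x * SCALE) / SCALE)
    errs_rnd.append(x - round(x * SCALE) / SCALE)

print(sum(errs_chop) / len(errs_chop))  # positive: chopping is biased, ~half an ulp
print(sum(errs_rnd) / len(errs_rnd))    # near zero: rounding errors cancel
```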

|> I think such an architecture is too great a departure from current
|> norms to be considered, which is a disappointment, as it seems to
|> have many merits: it is neither too big nor too small, but "just
|> right".

Er, no. It may have been then, but 36 bits is too small for modern
systems. The optimal word size has been increasing steadily over
the years, a fact that should surprise nobody.


Regards,
Nick Maclaren.
From: Andrew Swallow on
Quadibloc wrote:
> On my web site, at
>
> http://www.quadibloc.com/arch/perint.htm
>
> I have started a page exploring an imaginary 'perfect' computer
> architecture.
>
> Struggling with many opcode formats with which I was not completely
> satisfied in my imaginary architecture that built opcodes up from 16-
> bit elements, I note that an 18-bit basic element for an instruction
> solves the problems previously seen, by opening up large vistas of
> additional opcode space.
>
> Even more to the point, if one fetches four 36-bit words from memory
> in a single operation, not only do aligned 36-bit and 72-bit floats
> fit nicely in this, but so do 48-bit floating-point numbers. These
> provide 10 digits of precision, and are therefore very useful, since
> they allow a shorter, faster floating-point type to be used for many
> situations where single precision does not fit, but double-precision
> is overkill.
>
> Making single precision 36 bits instead of 32 is likely to be useful
> as well, if the complaints of many programmers at the time of the 7090
> to 360 changeover were justified.
>
> Traditional 36-bit computers used 36-bit instruction words, fetched 36
> bits from memory at a time, and therefore did not offer a 48-bit
> floating point type.
>
> Note that fetching 144 bits from memory at a time means one can have
> not just 6-bit and 9-bit characters, but even 8-bit characters too, as
> 144 is a multiple of 16. So this kind of implementation is no longer
> at war with the 8-bit world.
>
> I think such an architecture is too great a departure from current
> norms to be considered, which is a disappointment, as it seems to
> have many merits: it is neither too big nor too small, but "just
> right".

You can implement this on an FPGA. Internally, FPGAs can support
144-bit-wide buses. You will need memory-management logic to handle
off-chip memory access; the width of that interface is set by the RAM
module manufacturers.

Andrew Swallow
From: Quadibloc on
Nick Maclaren wrote:
> Er, no. It may have been then, but 36 bits is too small for modern
> systems. The optimal word size has been increasing steadily over
> the years, a fact that should surprise nobody.

Given that I'm using a data type that goes three to a 144-bit
quadword, I figure I'll have to require that base register addresses
be quadword addresses.

That means that a 36-bit base register provides the equivalent of a 40-
bit virtual address, which nicely matches the number of address lines
brought out on some current architectures.
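The 36-to-40-bit step follows if the smallest addressable unit is a 9-bit byte, sixteen of which make a 144-bit quadword (my reading of the post; the 9-bit unit is an assumption):

```python
QUADWORD_BITS = 144
BYTE_BITS = 9  # assumed addressable unit: sixteen of these per quadword
BYTES_PER_QUADWORD = QUADWORD_BITS // BYTE_BITS   # 16
SHIFT = BYTES_PER_QUADWORD.bit_length() - 1       # 4 extra address bits

def byte_address(quadword_base: int, byte_offset: int) -> int:
    """Scale a 36-bit quadword base up to a byte-granular address."""
    return (quadword_base << SHIFT) | byte_offset

# The top of a 36-bit quadword space lands at the top of a 40-bit byte space.
top = byte_address((1 << 36) - 1, BYTES_PER_QUADWORD - 1)
print(top.bit_length())  # 40
```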

In any case, one might only wish to do 36-bit arithmetic, but nothing
stops one from using 72-bit addresses.

John Savard

From: Torben Ægidius Mogensen on
"Quadibloc" <jsavard(a)ecn.ab.ca> writes:

> I have started a page exploring an imaginary 'perfect' computer
> architecture.
>
> Struggling with many opcode formats with which I was not completely
> satisfied in my imaginary architecture that built opcodes up from 16-
> bit elements, I note that an 18-bit basic element for an instruction
> solves the problems previously seen, by opening up large vistas of
> additional opcode space.
>
> Even more to the point, if one fetches four 36-bit words from memory
> in a single operation, not only do aligned 36-bit and 72-bit floats
> fit nicely in this, but so do 48-bit floating-point numbers.
> [...]
> I think such an architecture is too great a departure from current
> norms to be considered, which is a disappointment, as it seems to
> have many merits: it is neither too big nor too small, but "just
> right".

Need for more opcode space is not a very good reason to increase the
word size (as used for numbers etc.) -- many processors have opcodes
that are of a different size than the word size. Also, the trend these
days seems to be toward decreasing opcode size -- several 32-bit RISC
CPUs have added 16-bit opcodes to reduce code size. If you can't fit
what you want into a single 32-bit word, you might consider splitting
some instructions in two -- you pay when you use these, but not when
using instructions that fit into 32 bits, unlike with a uniform
36-bit opcode, where all instructions pay for the size of the largest.

And fixed-size opcodes seem to be on the way out as well -- Thumb-2
freely mixes 16- and 32-bit instructions, and on x86, which has a very
variable opcode size, handling this takes up only a small fraction of
the die space; with caching of decoded instructions, the time
overhead is also very limited.
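For what it's worth, the Thumb-2 width rule is cheap to state: a halfword whose top five bits are 0b11101, 0b11110 or 0b11111 begins a 32-bit encoding, and anything else is a complete 16-bit instruction. A sketch (the constants are the ones documented in the ARM architecture manual):

```python
def thumb_insn_length(first_halfword: int) -> int:
    """Length in bytes of a Thumb-2 instruction, from its first 16-bit halfword."""
    return 4 if (first_halfword >> 11) in (0b11101, 0b11110, 0b11111) else 2

print(thumb_insn_length(0x4770))  # BX LR -> 2
print(thumb_insn_length(0xF000))  # first halfword of a 32-bit branch -> 4
```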

As for using 36 bits to increase number precision over 32 bits, the
step is too small, and the effort of handling strings without waste is
a considerable complication (in particular in C-like languages, where
you expect to have pointers to individual characters in a string).

A more logical intermediate step between 32 and 64 bits is 48 bits --
you have a whole number of 8 or 16 bit characters in a word, so you
can still have byte addressability. But power-of-two words do have a
clear advantage in alignment and fast scaling of indexes to pointers.

If you want a 36-bit word, you should consider 6-bit characters, so
you have a whole number of characters per word -- that was done on
some older computers (like the UNIVAC 1100), which used Fieldata
characters.
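Packing a whole number of 6-bit characters per 36-bit word looks like this (generic 6-bit codes; the actual Fieldata code assignments are not reproduced here):

```python
CHAR_BITS = 6
CHARS_PER_WORD = 36 // CHAR_BITS  # exactly 6, no wasted bits

def pack_word(chars):
    """Pack six 6-bit character codes into one 36-bit word, high char first."""
    assert len(chars) == CHARS_PER_WORD
    word = 0
    for c in chars:
        assert 0 <= c < (1 << CHAR_BITS)
        word = (word << CHAR_BITS) | c
    return word

def unpack_word(word):
    """Recover the six 6-bit codes from a 36-bit word."""
    return [(word >> (CHAR_BITS * i)) & 0x3F
            for i in reversed(range(CHARS_PER_WORD))]

codes = [1, 2, 3, 4, 5, 6]
assert unpack_word(pack_word(codes)) == codes
```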

Torben