From: Quadibloc on
Morten Reistad wrote:
> In article <1173149299.699296.318780(a)t69g2000cwt.googlegroups.com>,
> Quadibloc <jsavard(a)ecn.ab.ca> wrote:
..
> >In any case, one might only wish to do 36-bit arithmetic, but nothing
> >stops one from using 72-bit addresses.
..
> Ah.
..
> For a while you were very close to reinventing the PDP10.

My intention was to create something rather different from the PDP-10.

In two ways.

Instead of 16 general registers, I will continue with 8 base registers
and 8 arithmetic/index registers.

Instructions will be:

10 bits opcode

2 mode bits

3 bits destination register (destination index)

3 bits source register (source index)

if the mode bits say the source or destination is in memory, we add an
18-bit address specifier, having the form

3 bits base register specification

15 bits displacement

So instead of re-inventing the PDP-10, I am re-inventing the IBM 360
as a 36-bit computer instead of a 32-bit computer. With a touch of the
orthogonality of the PDP-11. What a difference incrementing by one
makes!
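The field layout above can be sketched in a few lines; this is only an illustration of the packing described in the post (function and field names are mine, not part of the proposal), assuming the memory form is opcode, mode, dest, src, base, displacement from high bits to low.

```python
# Sketch of the 36-bit memory-form instruction described above:
# 10-bit opcode | 2 mode bits | 3-bit dest | 3-bit src |
# 3-bit base register | 15-bit displacement  (10+2+3+3+3+15 = 36)

def encode(opcode, mode, dest, src, base=0, disp=0):
    """Pack the fields into one 36-bit word, high bits first."""
    assert 0 <= opcode < 1 << 10
    assert 0 <= mode < 4 and 0 <= dest < 8 and 0 <= src < 8
    assert 0 <= base < 8 and 0 <= disp < 1 << 15
    word = opcode
    word = (word << 2) | mode
    word = (word << 3) | dest
    word = (word << 3) | src
    word = (word << 3) | base
    word = (word << 15) | disp
    return word

def decode(word):
    """Unpack a 36-bit word back into its six fields."""
    disp = word & 0x7FFF
    base = (word >> 15) & 0x7
    src = (word >> 18) & 0x7
    dest = (word >> 21) & 0x7
    mode = (word >> 24) & 0x3
    opcode = word >> 26
    return opcode, mode, dest, src, base, disp

w = encode(0x123, 2, 5, 3, 7, 0x1234)
assert w < 1 << 36  # fits in a single 36-bit word
assert decode(w) == (0x123, 2, 5, 3, 7, 0x1234)
```

Note the register-only form (no address specifier) occupies just the top 18 bits, which is what lets two register-form operations share a word.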

Another difference: the PDP-10 has byte instructions that pack bytes
of any width into a 36-bit word. My architecture will *only* handle
those byte widths and number sizes that fit *exactly* into a 144-bit
memory fetch, starting with 6-bit characters, the smallest practical
item:

6, 8, 9, 12, 16, 18, 24, 36, 48, 72.
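The widths that fill a 144-bit fetch with no waste are exactly the divisors of 144; a one-liner confirms that, from 6 bits up, those are precisely the values in the list above.

```python
# Byte widths that pack a 144-bit (four 36-bit word) fetch exactly
# are the divisors of 144; start at 6, the smallest practical item.
widths = [w for w in range(6, 144) if 144 % w == 0]
assert widths == [6, 8, 9, 12, 16, 18, 24, 36, 48, 72]
```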

John Savard

From: Steve O'Hara-Smith on
On Wed, 07 Mar 2007 00:19:16 +0000
Andrew Swallow <am.swallow(a)btopenworld.com> wrote:

> Unicode is up to 100,000 characters. You can put that in 18 bits.

Unicode defines code points from 0x0000 to 0x10FFFF, somewhat over
1,000,000 code points; to quote from the Unicode FAQ ...

-------------------------------------------------------------------
Both Unicode and ISO 10646 have policies in place that formally limit
future code assignment to the integer range that can be expressed with
current UTF-16 (0 to 1,114,111).
-------------------------------------------------------------------

That's too big for 18 bits.
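The arithmetic is easy to check: 18 bits index 2^18 = 262,144 values, while the Unicode code space is 17 planes of 65,536 code points.

```python
# Unicode code space: 0x0000 through 0x10FFFF, i.e. 1,114,112 code
# points.  18 bits give only 2**18 = 262,144 values; 21 bits suffice.
code_points = 0x10FFFF + 1
assert code_points == 1_114_112
assert code_points > 2 ** 18   # does not fit in 18 bits
assert code_points <= 2 ** 21  # 21 bits are enough
```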

--
C:>WIN | Directable Mirror Arrays
The computer obeys and wins. | A better way to focus the sun
You lose and Bill collects. | licences available see
| http://www.sohara.org/
From: Rich Alderson on
Morten Reistad <first(a)last.name> writes:

> A really large PDP-10 had 4kW/18Mb of ram. This is a little beyond current
> L2 caches, but 2kW/9MB should be doable. A large installation had 2-4G
> disk. Also doable as current RAM.

A *really large* PDP-10 has (note tense) 128MW (and you meant 4MW, not 4KW,
above). That's an XKL Toad-1 with 4 32MW boards (minimal memory on system is
32MW). The PDP-10 did not die with DEC.

--
Rich Alderson | /"\ ASCII ribbon |
news(a)alderson.users.panix.com | \ / campaign against |
"You get what anybody gets. You get a lifetime." | x HTML mail and |
--Death, of the Endless | / \ postings |
From: John Mashey on
On Mar 7, 5:38 am, "Quadibloc" <jsav...(a)ecn.ab.ca> wrote:
> David Kanter wrote:
> > Frankly, using non powers of 2 seems like a rather odd design choice,
> > and I have trouble thinking of why you'd do it.
>
> Actually, the only time that powers of 2 matter is if one is doing bit
> addressing, and since hardly anyone does that, whether the width of a
> word is a power of two or not doesn't matter.

Well, actually...
1) Minor: shift instructions lose a little encoding density for
non-power-of-2-sized registers.
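To make the "minor" point concrete, here is a small sketch (my own illustration, not from the post): the shift-count field needs ceil(log2(n)) bits for an n-bit register, and only a power-of-2 width uses every encoding in that field.

```python
import math

def shift_field(n):
    """Bits needed to encode shift counts 0..n-1 for an n-bit
    register, and the fraction of field encodings actually used."""
    bits = max(1, math.ceil(math.log2(n)))
    return bits, n / (1 << bits)

# 32-bit register: 5-bit count field, every code meaningful.
assert shift_field(32) == (5, 1.0)

# 36-bit register: 6-bit count field, only 36 of 64 codes used.
assert shift_field(36) == (6, 36 / 64)
```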

2) Major: I repeat what I posted in 1994 and 2000:

"For whatever reasons, IBM chose 8-bit bytes for the S/360, in the
early
1960s, long before there were any Intel chips or any microprocessors.
[Contrary to popular belief, computing did not start with micros :-)].

Once IBM did that, we were going to have 8-bit bytes throughout the
industry, because almost everyone was going to have to deal with
streams of 8-byte bytes, which was either excruciating if your
hardware
supported 7-bit bytes, and wasteful of space if you used 9-bits.
(recall that this happened in the core memory era, where a 512KB
machine
was quite large.) Had they chosen 9-bit bytes, and 36-bit words,
that's what we'd have, (and the 7090 crowd would have been happier). "

From: Nick Maclaren on

In article <1173302581.467078.240170(a)n33g2000cwc.googlegroups.com>,
"John Mashey" <old_systems_guy(a)yahoo.com> writes:
|>
|> 2) Major: I repeat what I posted in 1994 and 2000:
|>
|> "For whatever reasons, IBM chose 8-bit bytes for the S/360, in the
|> early
|> 1960s, long before there were any Intel chips or any microprocessors.
|> [Contrary to popular belief, computing did not start with micros :-)].
|>
|> Once IBM did that, we were going to have 8-bit bytes throughout the
|> industry, ...

And, to clarify, IBM's dominance of the IT industry in the 1960s was
comparable to Microsoft's domination of software in the 1990s.

One could also claim that the standardisation of two's complement for
integers and signed magnitude for floating-point was settled at the
same time and for the same reasons.

It may not have been ABSOLUTELY certain that people followed IBM
(after all, we didn't go with EBCDIC), but it was damn close to it.


Regards,
Nick Maclaren.