From: Stephen Fuld on
Torben Ægidius Mogensen wrote:

snip

> UNIVAC 1100 also used a 9-bit ASCII. I don't recall what extra
> characters (if any) were added.

No extra characters were added. The extra bits were usually zeros.
However, they could be used in a special mode of I/O to stop a write
operation at a character boundary, thus allowing writing of tape blocks
with non-word multiple block sizes.
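For concreteness, here is a small Python sketch (my own illustration, not code from the original systems) of four 9-bit quarter-word characters packed into a 36-bit word, with the ninth bit of each left zero as described above:

```python
# Pack four 9-bit "quarter words" into one 36-bit word. When the quarter
# words hold plain 7-bit ASCII, the high bits stay zero, as noted above.
def pack_quarter_words(chars):
    assert len(chars) == 4
    word = 0
    for c in chars:
        code = ord(c)
        assert code < 2 ** 9        # each quarter word is 9 bits
        word = (word << 9) | code   # leftmost character in the high bits
    return word                     # fits in 36 bits

def unpack_quarter_words(word):
    # Quarter words sit at bit offsets 27, 18, 9, 0 from the right.
    return [chr((word >> shift) & 0o777) for shift in (27, 18, 9, 0)]

w = pack_quarter_words("UNIV")
print(oct(w), unpack_quarter_words(w))
```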

--
- Stephen Fuld
(e-mail address disguised to prevent spam)
From: Rich Alderson on
Morten Reistad <first(a)last.name> writes:

> For a while you were very close to reinventing the PDP10.

> 36 bit string descriptors that can have any byte length from 1-18

If you're talking about byte pointers, the byte length legally runs from 1-36--
and in extended memory there are shorthand byte pointers for common length-and-
position combinations involving values higher than 36.
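For readers who haven't met them: a PDP-10 byte pointer carries a position field P (the number of bits to the right of the byte within the word) and a size field S (the byte width, 1-36 bits), and the LDB instruction extracts that field. A rough Python model of the extraction (function name and test value are mine):

```python
# A rough model of the PDP-10 LDB (load byte) operation. P counts the
# bits remaining to the right of the byte; S is the byte width.
def ldb(word, p, s):
    assert 0 <= p and 1 <= s <= 36 and p + s <= 36
    return (word >> p) & ((1 << s) - 1)

# The second of four 9-bit bytes in a 36-bit word sits at p=18, s=9.
word = 0o123_456_701_234    # arbitrary 36-bit test value
print(oct(ldb(word, 18, 9)))
```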

--
Rich Alderson | /"\ ASCII ribbon |
news(a)alderson.users.panix.com | \ / campaign against |
"You get what anybody gets. You get a lifetime." | x HTML mail and |
--Death, of the Endless | / \ postings |
From: David Kanter on
On Mar 5, 5:20 am, "Quadibloc" <jsav...(a)ecn.ab.ca> wrote:
> On my web site, at
>
> http://www.quadibloc.com/arch/perint.htm
>
> I have started a page exploring an imaginary 'perfect' computer
> architecture.
>
> Struggling with many opcode formats with which I was not completely
> satisfied in my imaginary architecture that built opcodes up from 16-
> bit elements, I note that an 18-bit basic element for an instruction
> solves the problems previously seen, by opening up large vistas of
> additional opcode space.

Why is 18 bits any better than 32 bits?

> Even more to the point, if one fetches four 36-bit words from memory
> in a single operation, not only do aligned 36-bit and 72-bit floats
> fit nicely in this, but so do 48-bit floating-point numbers.
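The arithmetic behind this is easy to check: 144 bits divides evenly by every item size mentioned in the thread. A few lines of Python, purely illustrative:

```python
# A 144-bit fetch (four 36-bit words) holds a whole number of each of
# these item sizes: characters of 6, 8, 9, 12, 16, or 18 bits, and
# floats of 36, 48, or 72 bits.
fetch = 4 * 36                      # 144 bits
for size in (6, 8, 9, 12, 16, 18, 36, 48, 72):
    print(f"{size:2}-bit items: {fetch // size} per fetch,"
          f" remainder {fetch % size}")
```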

Is this really important? It seems like if you want to have anything
resembling a successful general purpose architecture, you need to
support IEEE FP.

> These
> provide 10 digits of precision, and are therefore very useful, since
> they allow a shorter, faster floating-point type to be used for many
> situations where single precision does not fit, but double-precision
> is overkill.
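Roughly, an m-bit significand carries about m * log10(2) decimal digits. Assuming something like a 36-bit significand for the hypothetical 48-bit format (the post does not give the exponent/significand split), the quoted 10 digits checks out; a Python sketch:

```python
import math

# Decimal digits carried by an m-bit significand: about m * log10(2).
# The 36-bit significand for the 48-bit format is an assumption, not
# something specified in the post.
for bits, name in ((24, "32-bit IEEE single"),
                   (36, "assumed 48-bit significand"),
                   (53, "64-bit IEEE double")):
    print(f"{name}: ~{bits * math.log10(2):.1f} decimal digits")
```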

If DP is 'overkill', what is the price you pay for such overkill?
It's not clear to me that there is a large enough advantage to your
scheme to merit using non-IEEE floating point, and I doubt that the
case can be made effectively.

> Making single precision 36 bits instead of 32 is likely to be useful
> as well, if the complaints of many programmers at the time of the 7090
> to 360 changeover were justified.

That was an awfully long time ago.

> Traditional 36-bit computers used 36-bit instruction words, fetched 36
> bits from memory at a time, and therefore did not offer a 48-bit
> floating point type.

How many 36 bit computers still exist? Almost every general purpose
architecture is now ~32b with 64b support as well.

> Note that fetching 144 bits from memory at a time means one can have
> not just 6-bit and 9-bit characters, but even 8-bit characters too, as
> 144 is a multiple of 16. So this kind of implementation is no longer
> at war with the 8-bit world.
>
> I think such an architecture is too great a departure from current
> norms to be considered, but this seems to be a disappointment, as it
> seems that it has many merits - involving being neither too big nor
> too small, but "just right".

Why is 64b too big, and 32b too small? You haven't made a case for
that whatsoever...

Frankly, using non powers of 2 seems like a rather odd design choice,
and I have trouble thinking of why you'd do it.

DK

From: David W Schroth on
Torben Ægidius Mogensen wrote:
> Walter Bushell <proto(a)oanix.com> writes:
>
>
>>In article <7zejo2fyar.fsf(a)app-0.diku.dk>,
>> torbenm(a)app-0.diku.dk (Torben AEgidius Mogensen) wrote:
>>
>>
>>>A more logical intermediate step between 32 and 64 bits is 48 bits --
>>>you have a whole number of 8 or 16 bit characters in a word, so you
>>>can still have byte addressability. But power-of-two words do have a
>>>clear advantage in alignment and fast scaling of indexes to pointers.
>>>
>>>If you want 36-bit word, you should consider 6-bit characters, so you
>>>have a whole number of characters per word -- that was done on some
>>>older computers (like the UNIVAC 1100), which used fieldata
>>>characters.
>>>
>>
>>How about 9 bit characters? Or even 12. One could get a great extended
>>ASCII that would cover most of the world's languages with 12 bits.
>
>
> UNIVAC 1100 also used a 9-bit ASCII. I don't recall what extra
> characters (if any) were added.

Minor correction - the Univac 1100 still uses 9-bit ASCII (as well as
6-bit Fieldata).

FWIW, the machine can natively manipulate 6-, 9-, 12-, 18-, and 36-bit
characters. (Despite what Wikipedia says, the 12-bit characters are
always signed, as are the 36-bit characters. The 6- and 9-bit
characters are always unsigned, and the 18-bit characters can be either).
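A sketch of extracting such partial-word characters in Python (ignoring the signedness noted above; the function name and left-to-right counting convention are my own assumptions, not the machine's notation):

```python
# Extract the i-th n-bit character from a 36-bit word, for the sizes
# named above that divide 36 evenly (6, 9, 12, 18, 36). Characters are
# counted from the left. Sign extension for the signed sizes is omitted.
def get_char(word, size, index):
    assert 36 % size == 0
    per_word = 36 // size
    assert 0 <= index < per_word
    shift = 36 - size * (index + 1)
    return (word >> shift) & ((1 << size) - 1)

w = 0o252525252525          # alternating-bit test pattern
print([oct(get_char(w, 12, i)) for i in range(3)])
```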
>
> In any case, there are fairly universally adopted 7-bit (ASCII), 8-bit
> (ISO 8859-X) and 16-bit (Unicode) character sets, so it would be
> difficult to get universal acceptance of a new character set with a
> different size -- especially as 99% of all code that operates on
> characters has assumptions about the size of a character.
>
> Torben
From: Morten Reistad on
In article <mdd8xeauol8.fsf(a)panix5.panix.com>,
Rich Alderson <news(a)alderson.users.panix.com> wrote:
>Morten Reistad <first(a)last.name> writes:
>
>> For a while you were very close to reinventing the PDP10.
>
>> 36 bit string descriptors that can have any byte length from 1-18
>
>If you're talking about byte pointers, the byte length legally runs from 1-36--
>and in extended memory there are shorthand byte pointers for common length-and-
>position combinations involving values higher than 36.

The usefulness of the byte pointer becomes somewhat diminished
beyond 18 bit bytes, although it does still work.

Word instructions get more useful at that point.

I forgot the extended memory pointers though.

-- mrr