From: Mark Thorson on
I recently acquired an IBM 650 manual, and I was
interested to learn that it used a biquinary
representation for integers. This is a decimal
format in which a one-hot field of five bits
selects a value within the range 0-4 or 5-9,
and a one-hot field of two bits selects between
the lower and upper range. (In other biquinary
formats, a single binary bit may be used to
select between the two ranges.)

For example, 7d = 0111b = 10 00100 (biquinary)
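
A minimal sketch in C of one way to pack a digit
(the bit layout below, bi bits in positions 6 and
5 and quinary bits in positions 4 through 0, is my
own choice for illustration, not the 650's actual
wiring):

#include <stdio.h>

/* Pack one decimal digit 0-9 into a 7-bit biquinary code:
   bit 6 = "5-9" flag, bit 5 = "0-4" flag (exactly one set),
   bits 4..0 = one-hot value of (digit mod 5). */
static unsigned to_biquinary(unsigned digit)
{
    unsigned bi  = (digit >= 5) ? (1u << 6) : (1u << 5);
    unsigned qui = 1u << (digit % 5);
    return bi | qui;
}

int main(void)
{
    unsigned d = to_biquinary(7);
    /* prints "10 00100", matching the example above */
    printf("%u%u %u%u%u%u%u\n",
           (d >> 6) & 1, (d >> 5) & 1,
           (d >> 4) & 1, (d >> 3) & 1,
           (d >> 2) & 1, (d >> 1) & 1, d & 1);
    return 0;
}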

Doing a few web searches, I was surprised by
how common biquinary was among the old vacuum
tube machines: ENIAC, UNIVAC I, UNIVAC II, etc.

Why was this representation so popular?
Surely these people were aware of base-2, and
its more compact representation of integers.
Here are some possible reasons:

SIMPLER ERROR CHECKING -- many early machines had
extensive error checking facilities. On the 650,
any digit with more or fewer than one set bit in
either the binary or quinary field was detected
as an error. All single-bit errors were caught.
In a base-2 representation, all bit patterns are
legal values.
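
A rough sketch of that per-digit check in C, using
the same illustrative bit layout as the sketch
above (bi bits in positions 6 and 5, quinary bits
in positions 4 through 0):

/* A biquinary digit is valid only if exactly one bi bit
   and exactly one quinary bit are set; flipping any
   single bit breaks that property. */
static int one_bit_set(unsigned x)
{
    return x != 0 && (x & (x - 1)) == 0;
}

static int biquinary_valid(unsigned d)
{
    unsigned bi  = (d >> 5) & 0x03;   /* two bi bits   */
    unsigned qui = d & 0x1f;          /* five qui bits */
    return one_bit_set(bi) && one_bit_set(qui);
}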

HUMAN FRIENDLY -- the early machines had no
software tools. Biquinary was easier for humans
to program in machine code.

FEWER GATES -- biquinary allowed reducing the
amount of logic by using ring counters for adding
by repeated increments. Instead of implementing
a full adder for every bit, you only need to
handle the binary bit with parallel logic.
Biquinary has advantages over a 10-bit one-hot
decimal (1-of-10) representation because the
worst-case number of cycles for
addition-by-counting is roughly cut in half (at
most four increments instead of nine), and fewer
bits are needed to represent a digit (seven
instead of ten, although not quite as few as
base-2 binary).
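
To make the trade-off concrete, here is a software
caricature of addition-by-counting for a single
digit, written in C. A real machine steps ring
counters rather than running a loop; the point is
only that the quinary part needs at most four
increment steps, where a 1-of-10 ring could need
up to nine.

/* Add two decimal digits (0-9) the "counting" way.
   The addend's quinary part (b mod 5) is applied as
   repeated increments (at most four of them), and its
   bi part (the extra 5) is folded in as a single step.
   Carry out is 0 or 1. */
static unsigned add_by_counting(unsigned a, unsigned b,
                                unsigned *carry)
{
    unsigned sum = a;
    unsigned steps = b % 5;            /* at most 4 */
    *carry = 0;
    while (steps--) {
        sum += 1;
        if (sum == 10) { sum = 0; *carry = 1; }
    }
    if (b >= 5) {                      /* bi bit: one step */
        sum += 5;
        if (sum >= 10) { sum -= 10; *carry = 1; }
    }
    return sum;
}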

During an era when gates and flip-flops were
very dear, biquinary was king. Would it make
sense to bring back biquinary today?
Here are some possible reasons:

FPGA IMPLEMENTATIONS -- even today's FPGA
devices (and large hierarchical PLA devices
sometimes marketed as "FPGA") have woefully
few gate-equivalents and bits. For these
devices, logic is as dear as it was in the
days of vacuum tubes. Applications that
implement numeric human interfaces (keypads,
displays, etc.) may be more efficient
implemented in biquinary as the native
representation.

SIMPLER, FASTER DECODING -- a biquinary format is
already partially decoded. If you have to decode
the values of a numeric field, the gates in a
biquinary decoder can have fewer inputs than in
a base-2 decoder, and none of the inputs need to
be complemented. Decoding a base-2 field requires
inputs from the true or complemented value of
every bit in the field. Memory chips that
implement biquinary addressing can eliminate one
gate delay each from the row and column decoding
logic, so memory will be faster. Because memory
is the limiting factor on system speed, biquinary
computers will actually outrun their base-2
counterparts, even though their CPU arithmetic
will be slightly slower (assuming a parallel
biquinary arithmetic implementation, not an
addition-by-counting implementation).
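
To illustrate the decoding claim, here is the
gate-level contrast written as C expressions over
individual signals (the signal names are mine, for
illustration): recognizing the digit 7 in
biquinary takes a 2-input AND of two
uncomplemented signals, while recognizing 0111 in
a 4-bit base-2 field takes a 4-input AND with one
complemented input.

/* Biquinary "is this digit 7?": 7 = 5 + 2, so AND the
   "5-9" wire with the quinary "2" wire.  The other five
   wires simply aren't connected to this gate. */
static int is_seven_biquinary(int b5, int q2)
{
    return b5 && q2;
}

/* Base-2 "is this digit 7?": the pattern is 0111, so every
   bit of the field feeds the gate, and bit 3 must be
   complemented. */
static int is_seven_binary(int b3, int b2, int b1, int b0)
{
    return !b3 && b2 && b1 && b0;
}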

At the low end, in embedded systems and FPGA
devices, biquinary makes sense because of the
economy in logic, with its accompanying reduction
in die size, power consumption, radiated EMI, etc.

At the high end, biquinary systems would be
inherently faster due to faster memory cycles.
You get human-friendly machine code for free.

Across the full range of size and performance, it
makes sense to implement biquinary as the native
representation of integer data and addresses.
Now that I've explained it, does everybody agree
that base-2 should be discarded as a relic of
late 20th century technology, appropriate for
its time, which has now passed?

From: Niels Jørgen Kruse on
Mark Thorson <nospam(a)sonic.net> wrote:

> Memory chips that
> implement biquinary addressing can eliminate one
> gate delay each from the row and column decoding
> logic, so memory will be faster. Because memory
> is the limiting factor on system speed, biquinary
> computers will actually outrun their base-2
> counterparts, even though their CPU arithmetic
> will be slightly slower (assuming a parallel
> biquinary arithmetic implementation, not an
> addition-by-counting implementation).

Wire delay is the limiting factor on memory speed. One gate delay is
completely irrelevant.

--
Mvh./Regards, Niels Jørgen Kruse, Vanløse, Denmark
From: Jon Beniston on
>
> FPGA IMPLEMENTATIONS -- even today's FPGA
> devices (and large hierarchical PLA devices
> sometimes marketed as "FPGA") have woefully
> few gate-equivalents and bits. For these
> devices, logic is as dear as it was in the
> days of vacuum tubes. Applications that
> implement numeric human interfaces (keypads,
> displays, etc.) may be more efficient
> implemented in biquinary as the native
> representation.

Er? Even the smallest current FPGAs support tens of thousands of gates,
with the larger devices supporting hundreds of thousands.

http://direct.xilinx.com/bvdocs/publications/ds112.pdf

Cheers,
Jon
From: Ian Rogers on
Asynchronous logic (http://www.google.com/search?q=asynchronous+logic)
considers a range of different data encodings, including M-of-N codes,
where data is encoded by asserting M out of N wires. From your
explanation, biquinary sounds like one possible M-of-N encoding. These
encodings affect the power and logic complexity of a design, and the
field has quite a long history
(http://www.cs.man.ac.uk/async/background/return_async.html). For
example, asynchronous logic was used in the MU5 for control.
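
As a quick illustrative check (using the 7-bit layout assumed in the
sketches in the original post), a biquinary digit always asserts
exactly 2 of its 7 wires, so it behaves as a constrained 2-of-7 code:

#include <stdio.h>

static unsigned to_biquinary(unsigned digit)
{
    unsigned bi = (digit >= 5) ? (1u << 6) : (1u << 5);
    return bi | (1u << (digit % 5));
}

int main(void)
{
    for (unsigned digit = 0; digit < 10; digit++) {
        unsigned code = to_biquinary(digit), wires = 0;
        for (unsigned bit = 0; bit < 7; bit++)
            wires += (code >> bit) & 1;
        printf("digit %u: %u of 7 wires asserted\n", digit, wires);
    }
    return 0;
}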

Ian Rogers
From: John Savard on
Mark Thorson <nospam(a)sonic.net> wrote in message news:<4248AB7A.FF5B46B5(a)sonic.net>...

> FEWER GATES -- biquinary allowed reducing the
> amount of logic by using ring counters for adding
> by repeated increments.

That's precisely why we can't use biquinary today. Adding would become
much slower, as it would be done by counting to five.

Also, it would waste bits in memory.

If *decimal* got revived, it would be by means of something like
Chen-Ho encoding, or the recent elaboration of it known as Densely
Packed Decimal.

John Savard