From: JJ on

Phil Tomson wrote:
> In article <1147155282.274065.16140(a)>,
> JJ <johnjakson(a)> wrote:
> >
> >Phil Tomson wrote:
> >> In article <1146975146.177800.163180(a)>,
> >> JJ <johnjakson(a)> wrote:
> >> >
> >
> >snipping
> >
> >
> >Transputers & FPGAs two sides of the same process coin
> >
> Are there any transputers still being made?
> Phil

I believe so, but only for embedded use in set-top boxes by ST, and only if
ordered in the millions. At one time they had IIRC 70-80% of that space
locked up with ST20 derivatives, which stripped the Transputer of
its links and added more conventional serial ports, plus set-top
specific IP cores. They also gutted the whole philosophy: no more
Transputer occam speak, the scheduler really is now controlled more by
the application, and it is programmed in plain C. They really might as
well have just started over with a simpler RISC core and gone from
there. There was a reference just a few years ago in EET to a newer
500MHz ASIC std cell part put together by the San Diego center. They
might still have ST20 pdfs on the website, but they really only have a
handful of customers.

Another legacy of the links is the IEEE 1355 / SpaceWire link
derived from the T9000 links, and even the HyperTransport links look
familiar, since some of the same people were involved.

My main reason for promulgating this sort of modern version of the
Transputer architecture is that in its previous form it had nowhere to
go, being a historical design locked into the 80s. But I realised that
the Memory Wall and current cache designs kill modern versions of cpus
whose designs originally started even before the Transputer.

The main idea I push is that a Thread Wall can replace the Memory Wall,
and that threaded cpus allow for incredibly simple Processor Elements versus
the OoO, superscalar designs we have now. Lots of PEs giving lots of threads are
relatively free; it doesn't really matter if PEs are idle. Memory
bandwidth is where the real cost is, and that limits the number of PEs that
can be used in one package. The conventional thinking has it the other
way around: expensive cpus with ever higher theoretical IPCs, with memory
treated as cheap.
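
(A back-of-envelope sketch of that bandwidth-limits-the-PE-count argument,
in plain C -- every number below is a made-up assumption for illustration,
not a figure from any real part:)

/* Rough model: memory bandwidth, not PE cost, caps the number of
 * Processor Elements one package can keep busy. All values are
 * illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    double mem_bw_bytes_per_s = 6.4e9;  /* assumed package memory bandwidth  */
    double pe_clock_hz        = 300e6;  /* assumed clock of one simple PE    */
    double miss_rate          = 0.05;   /* assumed fraction of ops going to DRAM */
    double bytes_per_miss     = 32.0;   /* assumed line/burst size in bytes  */

    /* Average DRAM traffic one busy PE generates per second. */
    double per_pe_demand = pe_clock_hz * miss_rate * bytes_per_miss;

    /* Bandwidth-limited PE count: beyond this, extra PEs just sit idle
     * waiting on memory, which is cheap anyway. */
    double pe_limit = mem_bw_bytes_per_s / per_pe_demand;

    printf("per-PE DRAM demand: %.2f GB/s\n", per_pe_demand / 1e9);
    printf("PEs the memory system can keep fed: %.1f\n", pe_limit);
    return 0;
}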

John Jakson
transputer guy

From: Jeremy Ralph on
My view is that the divide between hardware and software should be
reduced -- starting at the requirements and specification stage -- to
achieve better compute efficiency. Many software folks don't
understand how FPGAs and the concurrent nature of RTL / ESL can benefit
their application... Little do they realize that the portion of their
algorithm which accounts for 80% of their CPU utilization might be done
in 1/50 the time with a specialized accelerator / co-processor. On the
other side of the coin, however, many hardware guys don't properly
understand concepts like recursion, aspect-oriented programming,
object-oriented programming, UML, etc. Not to mention graph and tree
data structures.
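
(As a rough sanity check on that 80% / 1/50th claim above, Amdahl's law puts
the overall gain at roughly 4.6x; a tiny C calculation with those two figures
plugged in as assumptions:)

/* Amdahl's-law check of the "80% of CPU time, 50x faster in hardware"
 * figure; purely illustrative numbers. */
#include <stdio.h>

int main(void)
{
    double accel_fraction = 0.80;  /* portion of runtime moved to the accelerator */
    double accel_speedup  = 50.0;  /* assumed speedup of that portion             */

    /* Overall speedup = 1 / ((1 - f) + f / s) */
    double overall = 1.0 / ((1.0 - accel_fraction) + accel_fraction / accel_speedup);

    printf("overall speedup: %.2fx\n", overall);  /* ~4.63x, bounded by the 20% left in SW */
    return 0;
}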

Both hardware and software designers should begin to take a
system-level view, and understand that an efficient system is a balance
between sequential software and parallel hardware; where partitioning
decisions are governed by cost / benefit tradeoffs. Today and in the
past it seems that choosing software over custom hardware was all too
easy (except for applications like tele / datacom, which would be
impossible without specialized HW). After all SW is easier to develop,
well understood, and the price / performance of CPUs is amazing.
Nonetheless, the price / performance of FPGAs is also improving,
making them a viable option for those who seek to accelerate their
specialized SW algorithms.

From: Jeremy Ralph on
Any FPGA DIMM interface modules on the market today? This sounds...

PDTi [ ]
SpectaReg -- Spec-down code and doc generation for register maps
