From: ChrisQuayle on
Peter "Firefly" Lund wrote:

> You can write systems code in anything.
>
> Think about the amount of Z80 and 6502/6510 machine code people wrote in
> the eighties. Yes, a lot of it didn't even use assembler.

Halcyon days? Remember programming manually in hex, two passes, with
branch targets entered on the second pass? Not sure that I would want to
return to those days, though. High level languages may encourage less
thought about machine internals, but they do get the job done faster.

> I'm not going to implement the bank switching (sorry, "virtual memory"),
> the dual address spaces, or the two or three different protection levels.
>
> I might not even implement the simple stack-oriented floating-point
> instructions.
>

It does vary a lot, but a bare-bones implementation would be quite
useful for embedded work.

>
> I can handle TTL and other digital things just fine on paper but what
> kills me is all the analog there really is in digital electronics.
>

I guess the analogue is in the transition time, but noise, ringing,
setup and hold times, delay, race conditions are likely to be more
problematical. If you don't already have them, it may be worth looking
out some of the early 70's and 80's books on finite state machine
design, as well as classics like the AMD book on bit-slice microprocessor
design. Mick and Brick, iirc. I never got to design a bit-slice machine,
but some of the books were quite inspiring.

>
>
> No, it never was all that bad. It just looks a lot worse than it
> actually is. Ok, the conditional branches had some stupid mnemonics,
> that one actually slowed me down a lot. The PDP-11 conditional branches
> had marginally better names.
>
> -Peter

Here we differ - compared to other architectures, x86 was hard work at
both hardware design and assembler level. The cleanest from Intel was
the pre x86 8080, but nearly everything since smells of camel and looks
overcomplicated. To be fair, part of this is the need to maintain
backwards compatibility, but once the PC arrived, there was just a small
window of opportunity to start again with a clean sheet of paper and
design for the future, yet they missed the boat completely. There were
other valid candidates at the time, quite a few competing 16/32 bit
micros in the 80's. Nat Semi had the 16032 series, Zilog the Z8000, Texas
had the micro'd version of the TI 990, but the 68k and its descendants
are the only real survivors still in volume production, that is, not
riding the PC wave. Why? At least partly because they look and feel like
a clean design - easy to program and design hardware for. If you want to
communicate ideas, make the languages and interfaces easy to learn, etc.
The emphasis may have shifted away from bare metal, but someone has to
write the compilers. To me, elegant, clean design starts at the
foundations and is not something that can be glued on later at higher
levels...

Chris
From: "Peter "Firefly" Lund" on
On Thu, 4 Jan 2007, ChrisQuayle wrote:

> Halcyon days?

No, not really. I'm just saying that people can write systems code in
anything. People did.

> I guess the analogue is in the transition time, but noise, ringing, setup and
> hold times, delay, race conditions are likely to be more problematical. If

Timing and race conditions are easy. Come on, race conditions, I mean,
really. Races are not hard, okay?

No, it's things like terminating resistors and decoupling capacitors that
I need to get comfortable with. And maybe parasitics. And making sure
to tie /every/ unused pin to GND or VCC through a resistor. And making
sure GND never bounces. And that VCC is always stable. Probably also
getting the reset right in combination with the voltage ramping at power
on.

> you don't already have them, it may be worth looking out some of the early
> 70's and 80's books on finite state machine design, as well as classics like

Finite state machines are not hard, as long as we can stay digital.

> Here we differ - compared to other architectures, x86 was hard work at both
> hardware design and assembler level. The cleanest from Intel was the pre x86
> 8080,

What?!?! How many different voltages did you need to supply to it? And
in what order did they have to be turned on and off? And you are telling me
that it was easier to interface to it than the 8086/8088?

Sure, the 8086/8088 multiplexed address and data on the same pins, which
the 8080 didn't. A few latch chips are enough to take care of that.
That's /easy/.

Lack of built-in memory refresh is a bigger problem for a small machine of
that time.

The early 6502 Apple machines used the screen access to, as a side effect,
get the memory refreshed. That required rearranging the address bits to
the DRAM chips a bit but it was otherwise not expensive to build. As far
as I know, no other 6502-machine did it like that so it can't have been
too obvious. Some say it didn't work so well but what do I know, I never
had one.

Usually one had to build a special refresh circuit. The early PC did that
with a timer channel and DMA channel.

The Z80 CP/M machines and home computers avoided the problem entirely by
using a CPU with built-in memory refresh support.

The CBM-64 solved it by putting the refresh circuit into a corner of one
of its custom chips.

Software-wise, the 8086/8088 had a nice and simple mechanism for expanding
the address space. The 8080, Z80, and PDP-11 had bank-switching. Guess
what I prefer?

It also had better 16-bit support than the 8080, which /also/ had the
somewhat stiff bindings between some registers and some instructions.

The 8086 had far, far better support for parameter passing and local
variables on the stack.

> the boat completely. There were other valid candidates at the time, quite a
> few competing 16/32 bit micros in the 80's. Nat Semi had the 16032 series,
> Zilog the Z8000, Texas had the micro'd version of the TI 990, but the 68k and its

Look at how they expanded beyond 16 address bits. The 68K did it cleanly,
the 8086/8088 did it almost as well. The only problem was big arrays,
really, and the 8086/8088 mechanism was a lot cheaper than the 68K's PLUS
it was backwards-compatible. Pretty well done for an emergency project.

Intel's address extension technique for 8086/8088 was /so/ much better
than Zilog's for the Z8000. Zilog clearly can't have had much input from
anybody who actually programmed. Their scheme disappointed me when I
first read about it as a teenager. The NS16032/32016, on the other hand,
is a CPU I know very little about. It seems to have been slow and buggy
but otherwise nice.

I don't know enough about the TMS9900 and TMS99000 to have an opinion,
other than that the design led to good interrupt response times and must
have/would have become a pain once CPUs started getting faster more
quickly than RAM chips did (almost all "registers" were really just a
small workspace in RAM pointed to by a real register). Actual memory use,
such as array indexing, was a bit slow, wasn't it? And 16 bits only?
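
As I understand it, the "registers in RAM" scheme boils down to something
like the C sketch below -- the names and the word-addressed memory model
are mine, nothing cycle-accurate, just the indirection:

  #include <stdint.h>

  /* Sketch of the TMS9900 workspace idea: the only real register is the
     workspace pointer (WP); "R0".."R15" are 16 consecutive words in RAM,
     so every register access is really a memory access -- fine while RAM
     keeps up with the CPU, painful once it doesn't. */
  static uint16_t ram[32768];           /* 64K of RAM, as 32K 16-bit words */
  static uint16_t wp;                   /* workspace pointer (byte address) */

  static uint16_t read_reg(int r)               /* "register" r, 0..15 */
  {
      return ram[((wp >> 1) + r) & 0x7FFF];     /* wraps at the top of RAM */
  }

  static void write_reg(int r, uint16_t v)
  {
      ram[((wp >> 1) + r) & 0x7FFF] = v;
  }

  /* A context switch is cheap: just point WP at another workspace. */
  static void switch_workspace(uint16_t new_wp) { wp = new_wp; }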

> compilers. To me, elegant, clean design starts at the foundations and is not
> something that can be glued on later at higher levels...

Really?

Can't say I really agree. I think there is much to be said for the
incremental approach. Sometimes it produces something elegant, sometimes
it doesn't, but usually, it produces something that's useful.

-Peter
From: jacko on

Peter "Firefly" Lund wrote:
> On Thu, 4 Jan 2007, ChrisQuayle wrote:
>
> > Halcyon days?
>
> No, not really. I'm just saying that people can write systems code in
> anything. People did.

and still will

> > I guess the analogue is in the transition time, but noise, ringing, setup and
> > hold times, delay, race conditions are likely to be more problematical. If
>
> Timing and race conditions are easy. Come on, race conditions, I mean,
> really. Races are not hard, okay?

i do wonder how the quartus II fpga compiler handles (or optionally
handles) this automatically - a grey code state transition algorithm,
perhaps.
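
(for what it's worth, binary-to-gray is just an xor and a shift - a tiny
C sketch below, my own function names, nothing quartus-specific)

  /* gray code: consecutive values differ in exactly one bit, which is
     what makes the encoding attractive for state variables that cross
     asynchronous boundaries. */
  static unsigned to_gray(unsigned n) { return n ^ (n >> 1); }

  static unsigned from_gray(unsigned g)
  {
      unsigned n = g;
      while (g >>= 1)          /* fold the higher bits back down */
          n ^= g;
      return n;
  }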

> No, it's things like terminating resistors and decoupling capacitors that
> I need to get comfortable with. And maybe parasitics. And making sure
> to tie /every/ unused pin to GND or VCC through a resistor. And making
> sure GND never bounces. And that VCC is always stable. Probably also
> getting the reset right in combination with the voltage ramping at power
> on.

a pre-made development board would be a good prospect. let any analog
designer sort out their own board layout if needed.

> > you don't already have them, it may be worth looking out some of the early
> > 70's and 80's books on finite state machine design, as well as classics like
>
> Finite state machines are not hard, as long as we can stay digital.

the tools to automatically enter/draw these are expensive. and a
pipeline can add significant complexity. (must not forget the delays)

> > Here we differ - compared to other architectures, x86 was hard work at both
> > hardware design and assembler level. The cleanest from Intel was the pre x86
> > 8080,
>
> What?!?! How many different voltages did you need to supply to it? And
> in what order did they have to be turned on and off? And you are telling me
> that it was easier to interface to it than the 8086/8088?

long live the simplified 68K

> Sure, the 8086/8088 multiplexed address and data on the same pins, which
> the 8080 didn't. A few latch chips are enough to take care of that.
> That's /easy/.
>
> Lack of built-in memory refresh is a bigger problem for a small machine of
> that time.
>
> The early 6502 Apple machines used the screen access to, as a side effect,
> get the memory refreshed. That required rearranging the address bits to
> the DRAM chips a bit but it was otherwise not expensive to build. As far
> as I know, no other 6502-machine did it like that so it can't have been
> too obvious. Some say it didn't work so well but what do I know, I never
> had one.

nice idea.

> Usually one had to build a special refresh circuit. The early PC did that
> with a timer channel and DMA channel.
>
> The Z80 CP/M machines and home computers avoided the problem entirely by
> using a CPU with built-in memory refresh support.
>
> The CBM-64 solved it by putting the refresh circuit into a corner of one
> of its custom chips.
>
> Software-wise, the 8086/8088 had a nice and simple mechanism for expanding
> the address space. The 8080, Z80, and PDP-11 had bank-switching. Guess
> what I prefer?

the segment register method?

> It also had better 16-bit support than the 8080, which /also/ had the
> somewhat stiff bindings between some registers and some instructions.

unavoidable with short opcodes.

> The 8086 had far, far better support for parameter passing and local
> variables on the stack.
>
> > the boat completely. There were other valid candidates at the time, quite a
> > few competing 16/32 bit micros in the 80's. Nat Semi had the 16032 series,
> > Zilog the Z8000, Texas had the micro'd version of the TI 990, but the 68k and its
>
> Look at how they expanded beyond 16 address bits. The 68K did it cleanly,
> the 8086/8088 did it almost as well. The only problem was big arrays,
> really, and the 8086/8088 mechanism was a lot cheaper than the 68K's PLUS
> it was backwards-compatible. Pretty well done for an emergency project.
>
> Intel's address extension technique for 8086/8088 was /so/ much better
> than Zilog's for the Z8000. Zilog clearly can't have had much input from
> anybody who actually programmed. Their scheme disappointed me when I
> first read about it as a teenager. The NS16032/32016, on the other hand,
> is a CPU I know very little about. It seems to have been slow and buggy
> but otherwise nice.
>
> I don't know enough about the TMS9900 and TMS99000 to have an opinion,
> other than that the design led to good interrupt response times and must
> have/would have become a pain once CPUs started getting faster more
> quickly than RAM chips did (almost all "registers" were really just a
> small workspace in RAM pointed to by a real register). Actual memory use,
> such as array indexing, was a bit slow, wasn't it? And 16 bits only?

cache eliminates this problem. a cache line as a work space? the 16 bit
"problem?" would suit embedded world, and doubling all word sizes would
be a suitable 32 bit upgrade. the extra 16 bits of opcode space could
easily cover the fpu and other subsections in a 32 bit processor. this
also works well for 64 bit expansion.

if you're thinking "overflow is not backwards compatible for branch
counting" then your assembly skills need some adaptation.

> > compilers. To me, elegant, clean design starts at the foundations and is not
> > something that can be glued on later at higher levels...
>
> Really?
>
> Can't say I really agree. I think there is much to be said for the
> incremental approach. Sometimes it produces something elegant, sometimes
> it doesn't, but usually, it produces something that's useful.

i agree. now all i need is a C compiler template where i can fill in
the generated code for xor, and, load and sum, so that all operations
are defined at a high level in terms of these, plus a section of code
to load and save variables from the stack.
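
something along these lines, maybe - a hypothetical sketch (names made
up), with the derived operations written as plain C instead of emitted
code, just to show the primitive set is enough:

  #include <stdint.h>

  typedef uint16_t word;                       /* assume a 16-bit target */

  /* the primitives the template would have to be filled in with */
  static word p_xor(word a, word b) { return a ^ b; }
  static word p_and(word a, word b) { return a & b; }
  static word p_sum(word a, word b) { return (word)(a + b); }

  /* everything else lowered onto them */
  static word d_not(word a)         { return p_xor(a, 0xFFFFu); }
  static word d_or (word a, word b) { return p_xor(p_xor(a, b), p_and(a, b)); }
  static word d_neg(word a)         { return p_sum(d_not(a), 1u); }  /* two's complement */
  static word d_sub(word a, word b) { return p_sum(a, d_neg(b)); }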

cheers

http://indi.microfpga.com

From: "Peter "Firefly" Lund" on
On Thu, 4 Jan 2007, jacko wrote:

>> No, not really. I'm just saying that people can write systems code in
>> anything. People did.
>
> and still will

Yes ;)

But only for fun, these days. The amount of assembler in a kernel or a
run-time library is small now.

> a pre-made development board would be a good prospect. let any analog

That doesn't get me a pipelined VAX built from LSTTL chips.

>> Finite state machines are not hard, as long as we can stay digital.
>
> the tools to automatically enter/draw these are expensive.

No. Icarus Verilog is free, Xilinx ISE is gratis. Switch/case statements
can be written in whatever. I can also get graphviz to draw the graphs
for me.
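
A state machine is just a case statement over the current state. For
example (a throwaway sketch in C rather than Verilog, the detector is made
up, but the shape is the same in an HDL case statement):

  /* Trivial Mealy machine: set *match when the serial input has just
     completed the pattern 1-0-1 (overlaps allowed). */
  enum state { IDLE, GOT_1, GOT_10 };

  static enum state step(enum state s, int bit, int *match)
  {
      *match = 0;
      switch (s) {
      case IDLE:   return bit ? GOT_1 : IDLE;
      case GOT_1:  return bit ? GOT_1 : GOT_10;
      case GOT_10: if (bit) { *match = 1; return GOT_1; }
                   return IDLE;
      }
      return IDLE;
  }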

> a pipeline can add significant complexity. (must not forget the
> delays)

Only if you want it to go really fast or you have loops in the
data/control flow -- or you actually want to build the electronics
yourself, in which case there be Analogue Monfters.

Or if you are size-constrained.

>>> Here we differ - compared to other architectures, x86 was hard work at both
>>> hardware design and assembler level. The cleanest from Intel was the pre x86
>>> 8080,
>>
>> What?!?! How many different voltages did you need to supply to it? And
>> in what order did they have to be turned on and off? And you are telling me
>> that it was easier to interface to it than the 8086/8088?
>
> long live the simplified 68K

Which had an asynchronous protocol for memory and I/O access. Happily,
they didn't implement it fully, so one could apparently get away with
just grounding /DTACK instead of implementing Motorola's somewhat complicated
scheme. Of course, if you have a custom chip or enough PLAs, then it
doesn't matter.

Please google "DTACK grounded" and read the first few paragraphs of the
first newsletter.

>> Software-wise, the 8086/8088 had a nice and simple mechanism for expanding
>> the address space. The 8080, Z80, and PDP-11 had bank-switching. Guess
>> what I prefer?
>
> the segment register method?

Yep. That meant you could have "large" (up to 64K) arrays begin (almost)
anywhere in memory without having to worry about segment crossing. Also,
the bank switches had to be programmed explicitly, outside of the
instructions that loaded from/stored to the memory, whereas in the 8086 it
was implicit in the load/store (as one of the four segment registers).

The Z8000 seems to have gotten the implied segment thing right (it used
register pairs where the upper register contained the segment number) but
not the thing about segment crossings and array placements. Z8000
segments were entire non-overlapping blocks of 64K, numbered from 0 to
127.

8086/8088 got both right. The lower maximum memory size vs the Z8000 (1M
vs 8M) didn't matter nearly as much. The Z8000 saved an addition in the
address generation path but that was probably a bad decision.
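
In C terms, the two address calculations I mean are roughly these (a
sketch of the arithmetic only; it ignores wait states, the Z8000 MMU and
everything else):

  #include <stdint.h>

  /* 8086/8088: physical = segment * 16 + offset.  A segment can start on
     any 16-byte boundary and segments may overlap, so a <=64K array can
     be based (almost) anywhere.  20-bit result, 1M total. */
  static uint32_t addr_8086(uint16_t seg, uint16_t off)
  {
      return ((uint32_t)seg << 4) + off;
  }

  /* Z8000 segmented mode: the 7-bit segment number just selects one of
     128 fixed, non-overlapping 64K blocks -- concatenation, no addition.
     23-bit result, 8M total. */
  static uint32_t addr_z8000(uint8_t seg, uint16_t off)
  {
      return ((uint32_t)(seg & 0x7Fu) << 16) | off;
  }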

>> It also had better 16-bit support than the 8080, which /also/ had the
>> somewhat stiff bindings between some registers and some instructions.
>
> unavoidable with short opcodes.

My point was that the 8086/8088 didn't introduce that stiffness; it was
already there in the 8080.


[TMS 9900, no "real" registers]

> cache eliminates this problem. a cache line as a work space?

I am not sure that would have been enough -- but neither you nor I have
studied the instruction set and its addressing modes well enough to say.

> the 16 bit "problem?" would suit embedded world, and doubling all word
> sizes would be a suitable 32 bit upgrade.

Would it? How much extra performance would it gain and how much ease of
addressability? At what cost in terms of wider data paths, wider buses,
extra transistors, lower instruction densities?

I don't know. I'm not so sure. Chris' lament was that there were other
CPUs which had broken the 16-bit addressing barrier and done it better
than the 8086/8088. As far as I can tell the TMS 9900 hadn't broken it
but maybe the TMS 99000 did? My counterpoint is that the 8086/8088
actually did it in a quite nice but vastly underappreciated way.

> if you're thinking "overflow is not backwards compatible for branch
> counting" then your assembly skills need some adaptation.

My English reading comprehension skills apparently do.

-Peter
From: Del Cecchi on

"Peter "Firefly" Lund" <firefly(a)diku.dk> wrote in message
news:Pine.LNX.4.61.0701042214010.22558(a)ask.diku.dk...
> On Thu, 4 Jan 2007, jacko wrote:
>
> [snip]
>
> That doesn't get me a pipelined VAX built from LSTTL chips.
>
> [snip]
>
> -Peter

You are going to build a VAX out of LSTTL just like back in the day?

del

