From: John Ahlstrom on
ChrisQuayle wrote:
> Peter "Firefly" Lund wrote:
>
>>> So not much time slip between them.
>>
>>
>> As far as I know it was a bit worse than that.
>>
>> The 68K was introduced in September, 1979 and the 8086 on June 8, 1978.
>> I haven't been able to dig up any info on early availability of the
>> 8086 but the 68K seems to have had problems.
>>
>
> If you do a bit more digging, you'll find that the original research that
> led to the 68k was started in 1975, so it's very much in the same time
> frame...
>
> Chris

Certainly one of the most important differences between
the 8086 and the 68000 was intent: the 68000 was meant
to be the first of a long-lived architecture of high-end
microprocessors, while the 8086 was meant as a
quick-to-market placeholder until the real long-lived,
high-performance 8800 (8816, 432) arrived.

Different requirements produced different architectures
and implementations.

JKA
From: ChrisQuayle on
Peter "Firefly" Lund wrote:
> On Sun, 7 Jan 2007, ChrisQuayle wrote:
>
>> would miss pre and post dec/inc operators, as they can save a lot of
>> instructions when accessing arrays, depending on how you have
>> structured the code.
>
>
> These days, instructions are free but loads and stores are not ;)
>

Instructions are free?

Pre or post inc/dec addressing modes typically operate on a machine
register, so no additional load or store from memory is involved. That
is, the pointer is adjusted, not what it points to.
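
Roughly, in C terms (a toy sketch, names invented just to illustrate the
point): each *p++ below can map straight onto a post-increment addressing
mode on machines like the PDP-11 or 68k, so the pointer register is
bumped as part of the move itself.

    /* Toy sketch: copy n bytes using post-increment pointers. On a CPU
     * with post-increment addressing, a compiler can turn the body into
     * a single move instruction that also advances each pointer register;
     * the pointers are adjusted, not the data they point to. */
    void copy_bytes(char *dst, const char *src, unsigned n)
    {
        while (n-- > 0)
            *dst++ = *src++;    /* both pointers post-incremented */
    }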

> My guess is that sometimes the epilogue would be entered at the label
> "10$", in which case it will return with C = 1 (and the other flags
> untouched by this code).

Correct. Such constructs were widely used to save instructions / memory
and the associated access time. It's basically an economical way of
providing a normal or error return condition for a function, and an
example of how pre- or post-inc/dec addressing modes can be useful and
save a load / store or two. Note that this is not self-modifying code or
any other tricky programming idiom. Nor does it detract from readability
or program flow.
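
C has no way of handing back a condition-code bit, so as a rough analogue
(all names invented for illustration) the same shape becomes a routine
with one shared error exit, with the return value standing in for the
carry flag:

    /* Rough C analogue of the shared-epilogue idiom: one normal exit,
     * one error exit (the "10$" label in the assembler version), with
     * the return value standing in for the carry flag. */
    int get_byte(const char **pp, const char *end, char *out)
    {
        if (*pp >= end)
            goto err;           /* nothing left: take the error exit     */
        *out = *(*pp)++;        /* fetch and post-increment the pointer  */
        return 0;               /* normal return ("carry clear")         */
    err:
        return 1;               /* error return ("carry set")            */
    }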

Chris
From: ChrisQuayle on
Peter "Firefly" Lund wrote:
> On Sun, 7 Jan 2007, ChrisQuayle wrote:
>
>> This is typical of the idiom and saves memory. By the time you have a
>> couple of hundred modules, the overall saving is quite significant.
>>
>> Now of course, all is surfaces, and uSoft Studio rulz ok :-)...
>
>
> Try this:
>
> ftp://ftp.scene.org/pub/mirrors/hornet/code/effects/stars/mwstar.zip
>
> -Peter

Very cool - I have neither the fluency with PC architecture nor the x86
asm to produce something like that :-)...

Chris
From: ChrisQuayle on
Peter "Firefly" Lund wrote:

>
> No, but I can expect Motorola to make their CPUs more backwards compatible.
>
But are they backwards compatible in terms of instruction execution
stream? The exception stack frame format is irrelevant to well-written
system software and applications. Why should Motorola guarantee
compatibility at that level just so some 3rd-party vendor can hack a more
powerful CPU onto an old machine?

Just what exactly was it about the stack frame format that caused the
trouble? I don't have a 68060 manual here, so a full explanation is in
order, yes?

> Compatibility snags were a contributing factor to why Apple was a bit
> slow in introducing new CPUs for the old Macs.
>

A cynic might suggest that it was because Apple used undocumented
features of the CPU, or hardware operation outside the data sheet, to get
the job done in order to save a chip or two.

>
> Where do you need hardware-based interrupt priority handling (outside of
> NMI) that can't be done as well by software + hardware-based interrupt
> blocking?

Ok, let's compare two methods:

With a single-level interrupt structure and more than one interrupting
device, you effectively set the interrupt priority by the order in which
device status registers are polled in software, perhaps polling some or
all of those you are not interested in as well, just to determine the
interrupt source. The problem with this is that higher-priority devices
are denied access while a lower-priority device is being serviced. While
you may be able to re-enable interrupts within the handler to allow a
higher-priority device to get service once some initial work is done, the
software overhead to make this work properly can be quite significant.
This rather defeats the object of having a fast-access interrupt
structure in the first place. In engineering terms, it's a dog's
breakfast of a solution, though it is cheap.
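
As a rough C sketch of that first method (device names and status bits
are invented, just to show the shape), the priority is nothing more than
the order of the tests:

    /* Single-level, software-prioritised dispatch: all devices share one
     * interrupt, and priority is fixed purely by polling order. While a
     * lower-priority handler runs, higher-priority devices wait unless
     * interrupts are carefully re-enabled inside the handler. Names and
     * register layouts are hypothetical. */
    #include <stdint.h>

    #define IRQ_PENDING 0x80u            /* invented "request pending" bit */

    extern volatile uint8_t uart_status, timer_status, disk_status;
    void service_uart(void);
    void service_timer(void);
    void service_disk(void);

    void irq_dispatch(void)              /* the single interrupt entry point */
    {
        if (uart_status & IRQ_PENDING)        /* polled first: highest priority */
            service_uart();
        else if (timer_status & IRQ_PENDING)
            service_timer();
        else if (disk_status & IRQ_PENDING)   /* polled last: lowest priority */
            service_disk();
    }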

Ok, your turn - describe a hardware-prioritised interrupt structure to
fill in the second method...

Chris
From: Erik Trulsson on
ChrisQuayle <nospam(a)devnul.co.uk> wrote:
> Peter "Firefly" Lund wrote:
>
>>
>> No, but I can expect Motorola to make their CPUs more backwards compatible.
>>
> But are they backwards compatible in terms of instruction execution
> stream? The exception stack frame format is irrelevant to well-written
> system software and applications.

Applications would normally not need to know anything about the exception stack
frame format, but system software is another matter.

If the system wants to actually *handle* the exception properly then it will
need to know the format of the stack frame in order to find out things like
*which* instruction caused the exception, or which memory access caused a page
fault.
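
Very roughly, and with invented field names, the common header that a
68010-or-later exception frame starts with looks like this in C; the
model-specific fault information follows it, and that tail is what kept
changing between family members:

    /* Sketch of the common header of a 68010+ exception stack frame:
     * status register, program counter, then a format/vector word whose
     * top nibble says which (model-specific) fields follow. A real
     * handler must match the hardware layout exactly. */
    #include <stdint.h>

    struct m68k_frame_header {
        uint16_t sr;             /* status register at the exception         */
        uint16_t pc_hi, pc_lo;   /* PC of (or near) the faulting instruction */
        uint16_t format_vector;  /* bits 15-12: frame format; rest: vector   */
    };

    static unsigned frame_format(const struct m68k_frame_header *f)
    {
        return f->format_vector >> 12;   /* tells the OS how much follows    */
    }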


> Why should Motorola guarantee
> compatibility at that level just so some 3rd-party vendor can hack a more
> powerful CPU onto an old machine?

Not just for that, but also because compatibility at that level could allow
users to run older system software on newer systems.

>
> Just what exactly was it about the stack frame format that caused the
> trouble? I don't have a 68060 manual here, so a full explanation is in
> order, yes?

The problem was that the stack frame format *changed*. That meant that
you had to get an updated (or at least patched) OS in order to run on
the new CPUs. This was not just a problem for the Mac, but for all other
systems that used the 68k series as well.

>
>> Compatibility snags were a contributing factor to why Apple was a bit
>> slow in introducing new CPUs for the old Macs.
>>
>
> A cynic might suggest that it was because Apple used undocumented
> features of the CPU, or hardware operation outside the data sheet, to get
> the job done in order to save a chip or two.

That cynic would probably be wrong. It was rather the case that Apple depended
on documented features of the earlier models of the 68k series that were
different in the later models.



--
<Insert your favourite quote here.>
Erik Trulsson
ertr1013(a)student.uu.se