From: Dennis M. O'Connor on
<ranjit_mathews(a)yahoo.com> wrote ...
> Dennis M. O'Connor wrote:
>> <ranjit_mathews(a)yahoo.com> wrote ...
>> > No, since for a given design, if you know ops/GHz, you can estimate
>> > what the ops would be at (say) 50% more GHz.
>>
>> No, you can't, unless you can somehow make all
>> the other components in the system go faster too.
>
> Naturally. Would you expect Gene Amdahl or someone like him to build a
> new machine with higher compute performance but the same I/O? He would
> scale up everything unless the new machine is targeted at a different
> problem.

You can't scale up the speed of light.
Propagation delays matter, on-chip and off.

You don't seem to know much about
computer architecture or physics.

I more strongly suspect you are a troll.
--
Dennis M. O'Connor dmoc(a)primenet.com


From: ranjit_mathews@yahoo.com on
Nick Maclaren wrote:
> On most CPUs, it isn't as simple as 50% more GHz giving 50% more operations
> per second, even in artificial codes that don't access memory or do I/O.

How about codes that do access memory but not I/O and are memory-limited,
i.e., limited by how fast loads and stores can go?
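
Something like a STREAM triad is what I have in mind. A rough sketch, in C
(the array size is just a guess at something big enough to blow out the
caches, and nothing here is a measurement of any real machine):

/* Sketch of a memory-bandwidth-bound kernel (STREAM-style triad).   */
/* The array size is only meant to be much larger than any cache.    */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (16 * 1024 * 1024)

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    size_t i;
    clock_t t0, t1;
    double secs;

    if (!a || !b || !c)
        return 1;
    for (i = 0; i < N; i++) {
        b[i] = 1.0;
        c[i] = 2.0;
    }

    t0 = clock();
    for (i = 0; i < N; i++)          /* 2 loads + 1 store per element */
        a[i] = b[i] + 3.0 * c[i];
    t1 = clock();

    secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
    /* 3 doubles of traffic per element = 24 bytes */
    printf("%.2f GB/s effective bandwidth\n", 24.0 * N / secs / 1e9);

    free(a);
    free(b);
    free(c);
    return 0;
}

The expectation is that two otherwise-identical parts differing only in core
clock report much the same number here, because the loop spends its time
waiting on DRAM rather than on the ALUs.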

From: Ketil Malde on
"ranjit_mathews(a)yahoo.com" <ranjit_mathews(a)yahoo.com> writes:

>>>> Spec/GHz is very nearly totally meaningless.

> No, since for a given design, if you know ops/GHz, you can estimate
> what the ops would be at (say) 50% more GHz.

The obvious counterargument is that if you just know ops, you can
estimate the ops at 50% more GHz directly. The 'per GHz' qualifier is
unnecessary clutter.
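
To spell the arithmetic out with made-up numbers: a design scoring 2000 ops
at 2 GHz is 1000 ops/GHz, and the naive estimate at 3 GHz is 1000 * 3 = 3000,
which is exactly 2000 * 1.5. Dividing by GHz and multiplying it back in
gets you nothing you didn't already have.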

While I don't disagree with the barrage of responses to Ranjit, I
don't find them terribly convincing, either. I think that *if* you
managed to scale every component, the performance would scale
accordingly.

The problem is that in the process you perhaps introduce a new memory
subsystem and different cache sizes, and in the meantime your
competitor introduces a new micro-architecture with entirely different
trade-offs, typically addressing *bottlenecks* in the system (probably
ranked by price/performance) rather than scaling all parts equally.

Be that as it may, it would be interesting to see how well this
scaling holds in practice. There is a lot of data out there, from
SPEC to Quake frame rates - why not search for similar but scaled-up
systems and correlate CPU speed to benchmark scores?
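
The number crunching is trivial; a sketch in C that reads "GHz score" pairs
on stdin and prints the Pearson correlation (the input format is just an
assumption, feed it whatever you scrape from the published results):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double x, y;
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    double cov, vx, vy;
    long n = 0;

    while (scanf("%lf %lf", &x, &y) == 2) {
        sx += x;
        sy += y;
        sxx += x * x;
        syy += y * y;
        sxy += x * y;
        n++;
    }
    if (n < 2) {
        fprintf(stderr, "need at least two pairs\n");
        return 1;
    }

    cov = sxy - sx * sy / n;    /* n * covariance            */
    vx  = sxx - sx * sx / n;    /* n * variance of clock     */
    vy  = syy - sy * sy / n;    /* n * variance of score     */
    printf("n = %ld, r = %.3f\n", n, cov / sqrt(vx * vy));
    return 0;
}

A strong correlation by itself wouldn't settle much, though, since the
faster-clocked systems usually come with the newer memory subsystem and
caches as well.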

-k
--
If I haven't seen further, it is by standing in the footprints of giants
From: Jan Vorbrüggen on
> Why yes, I can. c has been constant for billions of years, and there's
> no evidence at all that it will increase any time soon.

There is some discussion among cosmologists that the fundamental constants,
including c, may have changed somewhat on the scale of the age of the
universe. How much, and whether it is (at least potentially) measurable,
I dunno.

Jan
From: Tim Bradshaw on
On 2006-10-11 04:19:24 +0100, "Del Cecchi"
<delcecchiofthenorth(a)gmail.com> said:

> It is indeed sad to see architects contemplate physical reality. :-)
> Sending multiple GHz signals multiple feet is pretty common these days.

Well, yes. Multiple GHz signals have been sent for many miles for a
long time...

> So having a foot of wire adds about 2ns of latency. So far as I know,
> no components have stagnated because of this.

The point I was trying to make is that 2ns (say) is 6 cycles of a 3GHz
processor. So there is latency, and more to the point, as you make
things faster that fixed latency costs ever more cycles, and ultimately
there is nothing you can do about it without changing the design
(physically shrinking it, adding cache, doing the MTA trick, or
something else). So things don't just scale with clock speed. Surely
that's uncontroversial?
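
To put numbers on it (the cycles-of-compute-per-access figure is an
invented knob, not a measurement of anything real):

#include <stdio.h>

int main(void)
{
    const double wire_ns = 2.0;      /* fixed propagation delay          */
    const double work_cycles = 10.0; /* hypothetical compute per access  */
    double ghz;

    for (ghz = 2.0; ghz <= 4.0; ghz += 0.5) {
        double stall_cycles = wire_ns * ghz;   /* the 2ns in cycles      */
        double ns_per_access = wire_ns + work_cycles / ghz;
        printf("%.1f GHz: %4.1f stall cycles, %5.2f ns per access\n",
               ghz, stall_cycles, ns_per_access);
    }
    return 0;
}

In that toy model, doubling the clock from 2 GHz to 4 GHz only takes an
access from 7ns down to 4.5ns, because the 2ns of wire doesn't shrink;
that's the sense in which things don't just scale with clock speed.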

(I was going to add that of course no one builds machines where main
memory is a foot away from the processor any more. But they do - for
big multi-board SMP machines I'm sure most of the memory is at least a
foot away from any given processor, & probably more like 2 feet.)

--tim