From: ranjit_mathews@yahoo.com on

Nick Maclaren wrote:
> In article <1160641396.087623.67890(a)m73g2000cwd.googlegroups.com>,
> "ranjit_mathews(a)yahoo.com" <ranjit_mathews(a)yahoo.com> writes:
> |> Jon Forrest wrote:
> |> > Today I read that we're going to get quad-core processors
> |> > in 2007, and 80-core processors in 5 years. This has
> |> > got me to wondering where the point of diminishing returns
> |> > is for processor cores.
> |>
> |> > Where do you think the point of diminishing returns might
> |> > be?
> |>
> |> For NUMA-optimized parallel codes, far beyond 80. So a better question
> |> is: beyond what core count does the number of codes that can use the
> |> cores fall to an infinitesimally small number?
>
> I am tempted to quote Dr Samuel Johnson on remarriage here, but shall
> refrain.
>
> That applies only to codes that have very small working sets and
> perform very little communication between threads; while there are some
> such, it turns out to be VERY hard to cast more than a VERY few problems
> into that form.

Combinatorial problems are available that can soak up any number of
processors. Take, for example, planning a route for your vacation
using a super-sophisticated successor of Microsoft Mappoint.
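
As a rough sketch of how such a search parallelizes (purely
illustrative Python with a made-up distance matrix and an idealised
cost function, nothing that Mappoint actually exposes), each worker
can exhaustively score the routes that begin at a different first
stop:

# Hypothetical illustration: scoring vacation routes in parallel.
# The distance matrix is random; it does not model a real road network.
from itertools import permutations
from multiprocessing import Pool
import random

random.seed(0)
N = 10                                  # waypoints to visit
DIST = [[0 if i == j else random.randint(10, 500) for j in range(N)]
        for i in range(N)]

def route_length(route):
    # Total driving "distance" for one ordering of the waypoints.
    return sum(DIST[a][b] for a, b in zip(route, route[1:]))

def best_starting_with(first):
    # Exhaustively score every route that begins at `first`.
    rest = [w for w in range(N) if w != first]
    return min(((first,) + p for p in permutations(rest)),
               key=route_length)

if __name__ == "__main__":
    # One independent sub-search per starting waypoint; the work is
    # embarrassingly parallel, limited only by how finely it is split.
    with Pool() as pool:
        candidates = pool.map(best_starting_with, range(N))
    best = min(candidates, key=route_length)
    print(best, route_length(best))

Each sub-search is independent, so in principle the limit is how finely
the permutation space can be partitioned, not the core count.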

From: kenney on
In article <egh1e8$3up$1$8300dec7(a)news.demon.co.uk>, tfb(a)tfeb.org
(Tim Bradshaw) wrote:

> but I'm willing to bet
> money it's not higher than c, and I suspect it's a bunch
> lower.

C is the speed of electromagnetic radiation in a vacuum; it is lower
in any medium. It is actually possible for a particle to travel
faster than light does in some media; see Cherenkov radiation. A lot
depends on the method of transmission as well: using wires is a lot
slower than waveguides, for example.
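
To put some very rough numbers on that (back-of-the-envelope Python
only; the 3 GHz clock and the velocity factors below are assumptions
for illustration, not measurements of any real interconnect):

# Back-of-the-envelope: how far a signal gets in one clock period.
# The clock rate and velocity factors are illustrative guesses only.
C = 299_792_458.0          # speed of light in a vacuum, m/s
CLOCK_HZ = 3e9             # assumed 3 GHz clock
CYCLE_S = 1.0 / CLOCK_HZ   # one clock period, roughly 333 ps

for name, fraction_of_c in [("free space", 1.0),
                            ("waveguide-like interconnect", 0.8),
                            ("slow RC-limited on-chip wire", 0.1)]:
    distance_mm = fraction_of_c * C * CYCLE_S * 1000.0
    print(f"{name:30s} ~{distance_mm:6.1f} mm per clock cycle")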

However, I would have thought that the main limiting factor with
dynamic RAM would be the discharge and charge timing. I was under
the impression that all the clever design work went into trying to
maximise the information content of each read and write. I have the
feeling that static RAM is faster than dynamic, hence its use in
cache memory; I suppose it might be possible to make greater use of
that.

Ken Young
From: kenney on
In article <1160536712.441602(a)nnrp1.phx1.gblx.net>,
dmoc(a)primenet.com (Dennis M. O'Connor) wrote:

> You can't scale up the speed of light.
> Propagation delays matter, on-chip and off.

Actually you can: the speed of light depends on the medium it is
travelling in. C is the speed of light in a vacuum and should
only be used for that.

Ken Young
From: Tim Bradshaw on
On 2006-10-12 11:14:10 +0100, kenney(a)cix.compulink.co.uk said:

> Actually you can, the speed of light depends on the medium it is
> travelling in. C is the speed of light in a vacuum and should only be
> used for that.

I'm sure Dennis meant the speed of light in vacuo: I certainly did.

From: Nick Maclaren on

In article <1160646792.020194.135340(a)i42g2000cwa.googlegroups.com>,
"ranjit_mathews(a)yahoo.com" <ranjit_mathews(a)yahoo.com> writes:
|>
|> Combinatorial problems are available, that can soak up any number of
|> processors. Take, for example, planning a route for your vacation,
|> using a super-sophisticated successor of Microsoft Mappoint.

It has always been trivial to soak up an arbitrary amount of computer
time. So what?

If you look at those problems in more depth, you will find that an
infinitesimal number of them justify the use of a full search; in
practice, all that is needed is a near-optimal solution. That is
why many 'insoluble' problems have been solved commercially for over
four decades.
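
For instance (a sketch only, using a random distance matrix and a
plain nearest-neighbour heuristic; I am not claiming this is what any
commercial planner does), a greedy pass produces a usable route in
O(n^2) steps where an exhaustive search would need n! route
evaluations:

# A random distance matrix again (not a real road network); the greedy
# pass below is near-optimal in practice and vastly cheaper than a
# full search.
import random

random.seed(0)
N = 12
DIST = [[0 if i == j else random.randint(10, 500) for j in range(N)]
        for i in range(N)]

def nearest_neighbour_route(start=0):
    # Greedily visit whichever unvisited waypoint is closest next.
    route, unvisited = [start], set(range(N)) - {start}
    while unvisited:
        here = route[-1]
        nxt = min(unvisited, key=lambda w: DIST[here][w])
        route.append(nxt)
        unvisited.remove(nxt)
    return route

route = nearest_neighbour_route()
print(route, sum(DIST[a][b] for a, b in zip(route, route[1:])))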


Regards,
Nick Maclaren.