From: Kjetil Svalastog Matheussen on


On Tue, 9 Jan 2007, Kjetil Svalastog Matheussen wrote:

>
>
> On Mon, 8 Jan 2007, Ben wrote:
>
> >
> > BTW: Has anyone done any hard real time work using Lisp? How'd it go?

I also think I read something about the Lisp guys at the University of
Texas at Austin using RScheme to control robots. RScheme was once hard
real-time capable.


From: Tim Bradshaw on
Kirk Sluder wrote:

> The high-performance processors are already moving to quad core.

Well, of course, the high-end processors shipped for the server market
will always be somewhat ahead of those for desktops, as there's a
commercially significant number of people willing to pay thousands of
dollars per part there, while the market for desktops that expensive is
really very small indeed (not many people buy $5-10k desktops any
more). It takes a year or two for parts to get cheap enough to make it
into the desktop market (and they're often lower spec in various ways -
less cache per core etc - to get the price down further).

But actually I read a review of a quad core Intel CPU which was clearly
aimed at (high-end) desktops just the other day (though it was some
horrible two-chips-in-one-package thing, so not actually quad core in
any real sense at all, merely a pair of very densely packed dual-core
CPUs). Outside of x86, 8 core CPUs have been shipping in significant
numbers for a while.

> None of the chip makers appear to have much interest in simplifying
> the market in that way.

The point I was making is that the same basic implementation
architecture is what will ship into both markets, not that identical
parts will. And it's not a matter of "simplifying the market", it's a
matter of it being far too expensive for processor vendors to develop
entirely independent lines for markets which are so closely related.

--tim

From: Tim Bradshaw on
Kirk Sluder wrote:

>
> Well, just speaking as a Mac user, the benefits of multi-core CPUs
> for desktop applications beyond simulating ecosystems has been noted
> by consumers.

I suspect you are confusing several issues here.

- The first multicore-in-a-single-socket Macs were the Intel boxes, so
there are multiple near-simultaneous performance changes tangled up:
significantly higher single-core performance, probably significantly
better compilation technology, and multiple cores per die.

- it is far, far easier to find something for a 2 core box (whether
multiple cores per socket or multiple sockets) to do than to find
something for an 8, 16, 32 core box to do.

- It is easy to design a 2 core system in such a way that it has
reasonable performance - you've been able to get 2 core systems (in the
form of 2 socket systems) from box-shifters like Dell (and actually
Apple I think - when did they start shipping 2 socket PPC machines?)
for a very long time, but no one who isn't doing serious design has
been shipping systems with more than 4 cores. This will change as it
becomes cheap to put large numbers of cores in a box.

- The consumer market is notoriously bad at making judgements about
performance and what influences it. No, that's wrong: the
tens-to-hundreds-of-thousands-of-dollars-per-box market is notoriously
bad at making judgements about
performance and what influences it; the consumer market is just even
worse.

> In addition to consumer/professional applications that
> are designed for multiple processors (such as photoshop) doing just
> about anything in a modern operating system involves multiple threads
> competing for CPU time.

Well, they're competing for something, for sure. That something is
probably memory access rather than actual CPU time.
Unfortunately there are almost no tools available which show how much
time a processor spends stalled waiting for memory rather than doing
anything useful - certainly all the standard tools show that time as
the processor being busy. You can deal with this with a single core if
it's multithreaded (by which I mean that the core itself selects which
thread to execute on each clock cycle, based typically on whether
memory accesses have completed). Such multithreaded cores have existed
for a while - the Tera MTA was a well-known example in the HPC arena,
and Sun's Niagara also has multiple threads per core (4, with I think 8
or 16 in the next generation design). Unfortunately I suspect you need
OS & compiler support to take advantage of these systems.
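The "busy but stalled" effect is easy to see even from a high-level
language. A hypothetical sketch (the names here are mine, not from the
thread): walk the same buffer sequentially and in a shuffled order. The
arithmetic is identical, but the random walk defeats the cache and
prefetcher, so it typically runs measurably slower while every standard
tool reports the CPU as 100% busy. The gap is far starker in C; in
CPython, interpreter overhead blunts it, so treat this as an
illustration rather than a benchmark.

```python
import random
import time

N = 1 << 20                  # 1M elements
data = list(range(N))

seq_order = list(range(N))
rand_order = seq_order[:]
random.shuffle(rand_order)   # same indices, cache-hostile order

def walk(order):
    # Identical work either way: one add per element.
    total = 0
    for i in order:
        total += data[i]
    return total

for name, order in [("sequential", seq_order), ("random", rand_order)]:
    t0 = time.perf_counter()
    s = walk(order)
    dt = time.perf_counter() - t0
    print(f"{name}: {dt:.3f}s  sum={s}")
```

Both walks compute the same sum; only the memory access pattern (and so
the stall time) differs.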

> I will agree that price and heat considerations are nice bonuses.
> But most consumers are not that interested in heat provided that
> they don't get burns from their laptop.

However they do care about things like battery life, noise, and system
cost which correlate quite well with power consumption. And they
*will* care about power consumption (even the Americans) when the
systems start costing significantly more than their purchase cost to
run for a year.

--tim

From: pTymN on
I work in the video games industry, and I think that multicore
processors are going to kill the PPU (physics processing unit) cards
that Ageia is trying to release. For the foreseeable future, more
realistic collision detection and particle based physics will happily
consume as many processors as we can throw at the problem. It will not
be cheap to add interactive fluids to a game, and this is one problem
that requires fairly random memory access, so GPUs won't be as useful.

I work on Gamebryo, and we recently parallelized our physics and
collision libraries. Triangle mesh to triangle mesh collisions are
computationally expensive and can be done in parallel.
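Since pairwise collision tests are independent of one another, this
kind of parallelisation boils down to farming object pairs out across
cores. A toy sketch (hypothetical, and nothing to do with Gamebryo's
actual code): a cheap bounding-sphere test stands in for the real
mesh-vs-mesh check, and a thread pool distributes the pairs. In a real
engine the per-pair work would be heavy enough to amortise the
scheduling, and CPython's GIL means this sketch shows the structure
rather than a genuine speedup.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations
import math

def spheres_overlap(a, b):
    # Each object is (x, y, z, radius); spheres overlap iff their
    # centres are no further apart than the sum of the radii.
    ax, ay, az, ar = a
    bx, by, bz, br = b
    return math.dist((ax, ay, az), (bx, by, bz)) <= ar + br

def colliding_pairs(objects, workers=4):
    pairs = list(combinations(range(len(objects)), 2))
    def check(p):
        i, j = p
        return p if spheres_overlap(objects[i], objects[j]) else None
    # Each pair is independent, so the checks can run concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return [p for p in pool.map(check, pairs) if p is not None]

objs = [(0.0, 0.0, 0.0, 1.0),
        (1.5, 0.0, 0.0, 1.0),    # overlaps the first sphere
        (10.0, 10.0, 10.0, 1.0)]
print(colliding_pairs(objs))
```

The same structure scales to mesh-vs-mesh tests: the broad phase prunes
pairs cheaply, and the expensive narrow-phase checks are what you
spread across cores.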

From: Tim Bradshaw on
pTymN wrote:

> I work in the video games industry, and I think that multicore
> processors are going to kill the PPU (physics processing unit) cards
> that Ageia is trying to release. For the foreseeable future, more
> realistic collision detection and particle based physics will happily
> consume as many processors as we can throw at the problem.

How good is the locality of these codes? I suspect that one feature of
multicore desktop systems will be that they will be *extremely* short
of memory bandwidth, because that's expensive to provide.

--tim