From: ChrisQ on
Robert Myers wrote:
> On Nov 27, 8:55 am, ChrisQ <m...(a)devnull.com> wrote:
>
>> US envy? You are joking, right? After the events of the last few
>> years, I would say more to be pitied. Nice place to visit, though.
>>
>> If Intel are coming to Europe, perhaps it's because they have run out
>> of fine minds to reprogram in the US, or is it just that the US doesn't
>> have them anyway?...
>>
> Oh, don't take it so personally.
>
> The only hot subject left in computers right now is stealing financial
> information, and the former Warsaw pact holds an insurmountable lead
> in that area.
>
> Unless and until someone decides that thermonuclear warheads are no
> longer important, the UK will be tied to the US and its bloated bomb
> establishment. Don't take it too hard. The special relationship
> gives the UK a place at the table while the US decides which mistake
> it will make next.
>
> What we're talking about here matters only to politicians,
> bureaucrats, and bureaucrat-wannabes posing as scientists, and it
> amounts to "mine (my computer, that is) is bigger than yours."
>
> Robert.
>

Don't worry, it was a generalisation on your part which may even be
partly true, unfortunately :-).

Even scientists fall prey to the fame and w/w syndrome, I guess,
especially if large budgets and enough blue sky are waved at them. The
global warming circus is the most visible example at present. The
movie has everything: fame, big-budget research, international status,
apocalypse now, and more. How could science refuse?

Back on topic, the most recent C language thread has been one of the
most interesting on c.a for months, with quite a few profound insights.
There seem to be many deep-thought specialists here, but no one really
seems to have a big-picture (holistic?) view of how the parallel
programming problem is going to be solved, all the way from the
programmer interface to the bare machine. From what I've read here,
computer architects seem to think that the problem will be solved at
the silicon level, with the rest business as usual, but it's going to
need much more than that.

I keep thinking of computing surfaces (and yes, the name has been used
before), but visualisation at the programmer level will be a key step
in the process...

Regards,

Chris
From: nmm1 on
In article <czYPm.103342$fF2.74077(a)newsfe26.ams2>,
ChrisQ <meru(a)devnull.com> wrote:
>
>Don't worry, it was a generalisation on your part which may even be
>partly true, unfortunately :-).

Though there is a parable about beams and motes that would bear
contemplation by many of the transpondians.

>Back on topic, the most recent C language thread has been one of the
>most interesting on c.a for months, with quite a few profound insights.
>There seem to be many deep-thought specialists here, but no one really
>seems to have a big-picture (holistic?) view of how the parallel
>programming problem is going to be solved, all the way from the
>programmer interface to the bare machine. From what I've read here,
>computer architects seem to think that the problem will be solved at
>the silicon level, with the rest business as usual, but it's going to
>need much more than that.

Dead right. It demonstrably ISN'T going to be solved in that way;
that's one of the few things that is clear. But I'm quite glad that
we didn't have any loons claiming to have seen the big picture, as
that would imply one of three things about them:

1) They were overdue for their promotion to subdeity, or perhaps
were a subdeity in mufti.

2) They were so blinkered that they were unaware that their big
picture was just a tiny and unimportant fragment.

3) They were just plain loons, and not even amusingly original
ones, either.


Regards,
Nick Maclaren.
From: Mayan Moudgill on
nmm1(a)cam.ac.uk wrote:
> In article <czYPm.103342$fF2.74077(a)newsfe26.ams2>,
> ChrisQ <meru(a)devnull.com> wrote:
>
>>Don't worry, it was a generalisation on your part which may even be
>>partly true, unfortunately :-).
>
>
> Though there is a parable about beams and motes that would bear
> contemplation by many of the transpondians.
>
>
>>Back on topic, the most recent C language thread has been one of the
>>most interesting on c.a for months, with quite a few profound insights.
>>There seem to be many deep-thought specialists here, but no one really
>>seems to have a big-picture (holistic?) view of how the parallel
>>programming problem is going to be solved, all the way from the
>>programmer interface to the bare machine. From what I've read here,
>>computer architects seem to think that the problem will be solved at
>>the silicon level, with the rest business as usual, but it's going to
>>need much more than that.
>
>
> Dead right. It demonstrably ISN'T going to be solved in that way;
> that's one of the few things that is clear. But I'm quite glad that
> we didn't have any loons claiming to have seen the big picture, as
> that would imply one of three things about them:
>
> 1) They were overdue for their promotion to subdeity, or perhaps
> were a subdeity in mufti.
>
> 2) They were so blinkered that they were unaware that their big
> picture was just a tiny and unimportant fragment.
>
> 3) They were just plain loons, and not even amusingly original
> ones, either.
>
>
> Regards,
> Nick Maclaren.

One big problem is that there is no "big picture"; there is an
abundance of them, and they all admit of different solutions.

In many of the cases requiring "parallel" programming, the computation
is not even the driver - disk and network access are, and the
architecture of the processor is mostly irrelevant.

In others, the parallel programming is either trivial (pmake,
Monte Carlo-based portfolio evaluation) or well-understood (dense
matrix computations).
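
To make "trivial" concrete, here is a minimal sketch of the Monte
Carlo case in C, assuming OpenMP and POSIX rand_r() - the code is
mine, not from any actual portfolio system, but the shape is the
point: one pragma buys essentially all the parallelism there is.

/* Trivially parallel Monte Carlo: estimate pi by dart-throwing.
 * Every iteration is independent, so a single reduction suffices.
 * Build with, e.g., gcc -O2 -fopenmp mc.c -o mc (POSIX assumed).
 */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const long n = 100000000L;   /* number of darts */
    long hits = 0;

#pragma omp parallel reduction(+:hits)
    {
        /* Per-thread RNG state: rand() is not thread-safe. */
        unsigned int seed = 1234u + (unsigned int)omp_get_thread_num();
#pragma omp for
        for (long i = 0; i < n; i++) {
            double x = (double)rand_r(&seed) / RAND_MAX;
            double y = (double)rand_r(&seed) / RAND_MAX;
            if (x * x + y * y <= 1.0)
                hits++;
        }
    }
    printf("pi ~= %f\n", 4.0 * (double)hits / (double)n);
    return 0;
}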

In yet others, the transforms required to parallelize the program are
problem-specific, require domain-specific knowledge, and are complicated
enough that the actual expression of the parallelism is a nit.

Yet other problems fit into the class of "we could do it in parallel,
but why bother - it runs fast enough", or its corollary, "it's simpler
to optimize the sequential version than to parallelize it".

The question is (excluding graphics): are there enough problems left to
make it worthwhile to pursue a systematic approach to parallelization
(i.e., languages, environments, etc.)? Or is it more cost-effective to
work on serial computation for the mainstream, and let the few
applications that need parallelism adopt ad hoc solutions for it?

I looked at Go and some other proposals for parallel programming
languages, and they just feel wrong. Perhaps it's the garbage collection,
or the fact that they are trying to use CSP as a modularization
mechanism, or whatever - but they seem to be trading away run-time
efficiency for programmer productivity. Which leads to the question -
why bother?
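
To put a shape on that trade, here is a hypothetical single-slot,
CSP-style channel written out in C and pthreads (my sketch, not Go's
actual machinery): every send or receive pays for a mutex and a
condition variable, which is roughly the run-time cost being traded
for the programming model.

/* A hypothetical single-slot CSP-style channel, C and pthreads.
 * Build with, e.g., gcc -O2 -pthread chan.c -o chan
 */
#include <pthread.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  can_send, can_recv;
    int             value;
    int             full;    /* 1 while a value waits in the slot */
} channel;

void chan_init(channel *c)
{
    pthread_mutex_init(&c->lock, NULL);
    pthread_cond_init(&c->can_send, NULL);
    pthread_cond_init(&c->can_recv, NULL);
    c->full = 0;
}

void chan_send(channel *c, int v)
{
    pthread_mutex_lock(&c->lock);
    while (c->full)                  /* wait for the slot to drain */
        pthread_cond_wait(&c->can_send, &c->lock);
    c->value = v;
    c->full = 1;
    pthread_cond_signal(&c->can_recv);
    pthread_mutex_unlock(&c->lock);
}

int chan_recv(channel *c)
{
    pthread_mutex_lock(&c->lock);
    while (!c->full)                 /* wait for a value to arrive */
        pthread_cond_wait(&c->can_recv, &c->lock);
    int v = c->value;
    c->full = 0;
    pthread_cond_signal(&c->can_send);
    pthread_mutex_unlock(&c->lock);
    return v;
}

static channel ch;

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 5; i++)
        chan_send(&ch, i);
    return NULL;
}

int main(void)
{
    pthread_t t;
    chan_init(&ch);
    pthread_create(&t, NULL, producer, NULL);
    for (int i = 0; i < 5; i++)
        printf("got %d\n", chan_recv(&ch));
    pthread_join(t, NULL);
    return 0;
}

Compare that against the cost of a plain function call and the trade
becomes obvious.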

Hmmm...here's a thought. If you have to have multiple machines (because
of disk/network issues), and you have compute cycles lying spare, and
you have to run programs across these machines, then maybe it does make
sense to have a parallel programming language which emphasizes
productivity over performance.

But I suspect that, in many cases, this is a technology looking for a
problem to solve. Honestly, when was the last time you came across an
app that needed parallelization (because of performance), but where you
could afford to burn cycles to "enhance programmer productivity"?
From: Robert Myers on
On Nov 28, 8:42 pm, Mayan Moudgill <ma...(a)bestweb.net> wrote:

>
> In yet others, the transforms required to parallelize the program are
> problem specific, require domain specific knowledge, and are complicated
> enough that the actual expression of the parallelism is a nit.
>

I have a hard time imagining how you view the world. I've made enough
not-so-subtle comments about the IBM view of the world, so I'm going
to drop that tack, but I assume that your world-view is driven by
whatever priorities are given to you and that you assume that that is
"the world."

I have an even harder time imagining how Nick views the world. He
presumably works in the midst of people like me, and yet, to read his
posts, you'd begin to think that what the world really lacks is enough
grubby details to obsess over endlessly.

The reality is that, for people who don't particularly want to spend
their time obsessing over details, computers have not become easier to
use along with becoming more powerful. Quite the opposite.

The recent thread on C talked as if using gcc were a perfectly
reasonable proposition and as if code written in C really were machine-
and, even more important, environment-portable. Every scrap of
non-trivial C code I've touched recently comes with tons of goo.
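
For anyone lucky enough not to know what I mean by goo, here is a
hypothetical composite - not from any particular package, just the
pattern: the portable-timestamp shim that every non-trivial C project
seems to grow.

/* Hypothetical composite of typical portability goo: merely getting
 * a monotonic timestamp takes a different API on every platform
 * (and on old glibc the POSIX branch also wants -lrt). */
#include <stdio.h>

#if defined(_WIN32)
#  include <windows.h>
static double now_seconds(void)
{
    LARGE_INTEGER f, t;
    QueryPerformanceFrequency(&f);
    QueryPerformanceCounter(&t);
    return (double)t.QuadPart / (double)f.QuadPart;
}
#elif defined(__APPLE__)
#  include <mach/mach_time.h>
static double now_seconds(void)
{
    static mach_timebase_info_data_t tb;
    if (tb.denom == 0)
        mach_timebase_info(&tb);
    return (double)mach_absolute_time() * tb.numer / tb.denom / 1e9;
}
#else   /* assume POSIX clock_gettime() */
#  include <time.h>
static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + (double)ts.tv_nsec / 1e9;
}
#endif

int main(void)
{
    printf("t = %f s\n", now_seconds());
    return 0;
}

Multiply that by every OS facility the program touches and you can
see where the goo comes from.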

./configure is great when it works. When it doesn't, you are once
again reduced to worrying about the kinds of details that are central
to why you *didn't* major in EE/CS.

Nick likes Fortran. I'll venture to say that I dislike Fortran much
less than I dislike C, but the entire world of programming is geared
to Java/C programming these days, and Fortran *always* seemed like a
step backward from PL/I.

You said that dense matrix programming is in hand. What you really
mean is that, if the builders of big clusters get their boxes to
perform reasonably well on anything, it will be on Linpack. As I
commented on another thread recently, the Linpack mentality has
resulted in many codes being rewritten so that they don't exploit
massive vector pipes very easily, and massive vector pipes have
suddenly become very inexpensive. If that's what you mean by "in
hand," then I guess things are in hand.

It's no big deal, really. People like me have always worked close to
the iron and I don't see that changing.

It's the snarky, knowing tone that pisses me off.

Robert.

From: nmm1 on
In article <IbGdndmlAckfTIzWnZ2dnUVZ_h6dnZ2d(a)bestweb.net>,
Mayan Moudgill <mayan(a)bestweb.net> wrote:
>
>One big problem is that there is no "big picture"; there is an
>abundance of them, and they all admit of different solutions.

Precisely. Or, in some cases, there is no known solution.

>The question is (excluding graphics): are there enough problems left to
>make it worthwhile to pursue a systematic approach to parallelization
>(i.e., languages, environments, etc.)? Or is it more cost-effective to
>work on serial computation for the mainstream, and let the few
>applications that need parallelism adopt ad hoc solutions for it?

Yes and no. I doubt that a single solution is worth pursuing, but
there are areas and aspects that are.

The problem with working on serial computation is that we have reached
limits that we have no idea how to get round, and believe are absolute,
at least in the medium term. I don't think that's quite true, but the
only approach that I know of that has a hope is programming language
redesign. And I know of nobody working on directions that are likely
to help with performance.


Regards,
Nick Maclaren.