From: glen herrmannsfeldt on
Colin Watters <boss(a)qomputing.com> wrote:

> "Friedrich" <friedrich.schwartz(a)wahoo.with.a.y.com> wrote in message
> news:ckur2618tag7buk10c8lngqpr2k4n94uec(a)4ax.com...
(snip)

>> So, in hope this may be the place, where people have hands on
>> experience with both, I was wondering if someone could explain
>> what is the difference between OpenMP and MPI ?
(snip)

> I think the "essential" difference is between the "shared-memory, multiple
> threads in one process" model that OpenMP provides, and the "multiple
> process, each with private memory" model of MPI.

Since it hasn't been mentioned yet, I will suggest another choice:

A language specifically designed around the problems of parallel
computation, in what seems to be called Implicit Parallelism.
(See the Wikipedia page of that name.) That page mentions about
nine different languages, some of which are well-known interpreted
languages. It also mentions ZPL, a compiled language designed
for parallel computation.

In the ZPL case, one describes the computation needed, while
the compiler figures out how best to divide that computation
among the available processors. As I understand it the usual
target for the ZPL compiler is MPI code, but that might change
as the technology changes.

As discussed on the linked page "Explicit Parallelism," the
advantage of explicit parallel processing is

"the absolute programmer control over the parallel execution.
A skilled programmer takes advantage of explicit parallelism
to produce very efficient code. However, programming with
explicit parallelism is often difficult, especially for
non computing specialists, because of the extra work involved..."

I suppose Coarray Fortran should be added to the list in
the Explicit parallelism page. In any case, one should
balance the work required with the potential gain.

Now, if one wanted to design an implicitly parallel language
that had the look of Fortran, that might not be a bad thing.
(It seems to me that ZPL has the look of Pascal.)
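
As a rough illustration of the implicit style in plain Fortran
(not ZPL -- just standard whole-array syntax), the programmer states
the computation and leaves the scheduling to the compiler:

```fortran
program implicit_style
  implicit none
  integer, parameter :: n = 100000
  real :: a(n), b(n), c(n)

  a = 1.0
  b = 2.0
  ! No loop and no directives: the array expression says *what* to
  ! compute, and a compiler is free to split the work across
  ! processors however it sees fit.
  c = a + b
  print *, c(1), c(n)
end program implicit_style
```

Nothing here promises parallel execution, of course; the point is only
that the notation leaves the door open.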

-- glen
From: Richard Maine on
glen herrmannsfeldt <gah(a)ugcs.caltech.edu> wrote:

> A language specifically designed around the problems of parallel
> computation,

Designing a language around a particular architecture feature seems to
me like a mistake in most cases. I suppose there can be appropriate
targets for such a thing, but the concept all but defines itself as
aiming at a niche rather than being general purpose. Niche markets do
exist.

I didn't bother to go research the details. I'm just reacting to the
above phrase as it stands. If that phrase is not an accurate
representation of something, then my comment would not apply.

In particular, there is a huge difference between designing a language
around something and designing a language that accommodates something.
That kind of difference came up in adding some of the object oriented
features in f2003. There was at least one person pushing for what I
would categorize as redesigning the Fortran language around object
oriented programming. (The proposer might disagree with the
categorization, but that's the way it seemed to me). Instead, some
object oriented features were added into the Fortran language in a way
that fit with the language. To me, the difference was fundamental. The
f2003 language is not designed around object orientation. You can
program in f2003 without caring a thing about object orientation unless
you happen to want to.

--
Richard Maine | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle | -- Mark Twain
From: Tim Prince on
On 7/2/2010 2:12 PM, Friedrich wrote:
> On Fri, 02 Jul 2010 12:18:54 -0400, Tim Prince<tprince(a)myrealbox.com>
> wrote:
>
>> On 7/2/2010 10:47 AM, Friedrich wrote:
>>> I was wondering if someone could explain
>>> what is the difference between OpenMP and MPI ?
>>>
>>
>>> for a Fortran programmer who wishes one day to enter the world of
>>> parallelism (just learning the basics of OpenMP now), what is the more
>>> future-proof way to go ?
>>>
>> "hybrid" (OpenMP as the threading model for MPI_THREAD_FUNNELED) has been
>> in use for at least a decade, and continues to gain ground.
>
>
> Tim, thanks for your answer.
>
> Unfortunately, I'm not sure what you meant (not sure whether there is
> a part missing here) - could you perhaps elaborate a little on
> your meaning?
>
> Friedrich
For the foreseeable future, both OpenMP and MPI programming skills will
be valuable. You may wish to start with OpenMP; it gives more return
for a given level of effort, when it is sufficient to support shared
memory (single node) execution. Many of the parallel programming
concepts apply to both.
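
A minimal sketch of the OpenMP starting point Tim describes (standard
OpenMP directives; the array and loop bounds are just placeholders):

```fortran
program omp_sum
  implicit none
  integer, parameter :: n = 100000
  integer :: i
  real :: a(n), total

  do i = 1, n
     a(i) = real(i)
  end do

  total = 0.0
  ! One directive turns the serial loop into a shared-memory parallel
  ! one. A compiler without OpenMP support treats the directive as a
  ! comment, and the loop still runs correctly in serial.
  !$omp parallel do reduction(+:total)
  do i = 1, n
     total = total + a(i)
  end do
  !$omp end parallel do

  print *, 'sum = ', total
end program omp_sum
```

That graceful fallback to serial code is a large part of why OpenMP
gives more return for a given level of effort.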

--
Tim Prince
From: Gib Bogle on
Colin Watters wrote:
> "Friedrich" <friedrich.schwartz(a)wahoo.with.a.y.com> wrote in message
> news:ckur2618tag7buk10c8lngqpr2k4n94uec(a)4ax.com...
>> Beliavsky a few minutes ago posted a link to an interesting
>> article,
>> which reminded me of something that has been wandering through my mind
>> for a while, but I had nowhere to ask.
>>
>> So, in hope this may be the place, where people have hands on
>> experience with both, I was wondering if someone could explain
>> what is the difference between OpenMP and MPI ?
>>
>> I've read the Wikipedia articles in whole, understood them in
>> segments, but am still pondering;
>> for a Fortran programmer who wishes one day to enter the world of
>> parallelism (just learning the basics of OpenMP now), what is the more
>> future-proof way to go ?
>>
>>
>> I would be grateful on all your comments,
>> Friedrich
>
> Caveat: I have no actual hands-on experience of either of these, just hopes
> and wishes. However...
>
> I think the "essential" difference is between the "shared-memory, multiple
> threads in one process" model that OpenMP provides, and the "multiple
> process, each with private memory" model of MPI.
>
> OpenMP's single process means little or no communications overhead. But it
> doesn't scale well beyond about 8 processors. Depending on the problem of
> course.
>
> MPI's multiple processes demand Inter Process Communication (IPC), which is
> a big learning curve, and a lot of work up front. But once the IPC is in
> place, MPI can be made to scale much better than OpenMP.
>
> OpenMP is well suited to the recent crop of multi-core PCs, but is (little
> or) no help with clusters.
>
> And of course an awful lot depends on the problem you are trying to
> simulate/solve.
>
> Anyone with actual experience care to comment?
>

I first developed my program (an agent-based simulator to model immune cell
interactions) on OpenMP, then was tempted to rewrite it to use MPI.
Unfortunately, I discovered that in my case the amount of data that needed to be
transferred across the processor boundaries more than cancelled out the speed
advantage of using multiple processors. This was on a quad-core machine, i.e.
with very fast communications, but even so the bus capacity constraint was
binding. I moved back to OpenMP.

It really is a matter of horses for courses. If data transfers are either
infrequent or involve only small quantities MPI clearly has the advantage for
problems that can be split between a large number of processors. In my case
OpenMP is the way to go.
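
For contrast with the OpenMP version, in the message-passing model
every transfer across a processor boundary is an explicit call, which
is exactly where the overhead described above shows up. A minimal MPI
sketch (standard MPI Fortran bindings; error handling omitted, and the
buffer is just a placeholder for real boundary data):

```fortran
program mpi_boundary
  use mpi
  implicit none
  integer :: rank, nprocs, ierr, status(MPI_STATUS_SIZE)
  real :: boundary(1000)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  boundary = real(rank)
  ! Each boundary exchange costs a send/receive pair. If these are
  ! large or frequent relative to the computation, the communication
  ! can cancel out the gain from the extra processors.
  if (rank == 0 .and. nprocs > 1) then
     call MPI_SEND(boundary, size(boundary), MPI_REAL, 1, 0, &
                   MPI_COMM_WORLD, ierr)
  else if (rank == 1) then
     call MPI_RECV(boundary, size(boundary), MPI_REAL, 0, 0, &
                   MPI_COMM_WORLD, status, ierr)
  end if

  call MPI_FINALIZE(ierr)
end program mpi_boundary
```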
From: glen herrmannsfeldt on
Richard Maine <nospam(a)see.signature> wrote:
> glen herrmannsfeldt <gah(a)ugcs.caltech.edu> wrote:

>> A language specifically designed around the problems of parallel
>> computation,

> Designing a language around a particular architecture feature seems to
> me like a mistake in most cases. I suppose there can be appropriate
> targets for such a thing, but the concept all but defines itself as
> aiming at a niche rather than being general purpose. Niche markets do
> exist.

Well, I suppose programming for parallel architectures has
been a niche part of programming for many years. It may not
be able to stay that way, though.

> I didn't bother to go research the details. I'm just reacting to the
> above phrase as it stands. If that phrase is not an accurate
> representation of something, then my comment would not apply.

That was just my thought when writing that, and may or may not apply.
It does seem that multicore processors and processor arrays are
getting more and more popular, and that the need for programming
such is increasing.

> In particular, there is a huge difference between designing a language
> around something and designing a language that accommodates something.
> That kind of difference came up in adding some of the object oriented
> features in f2003. There was at least one person pushing for what I
> would categorize as redesigning the Fortran language around object
> oriented programming. (The proposer might disagree with the
> categorization, but that's the way it seemed to me).

In that case, I see the distinction. The name C/C++ is often
used, with the assumption that the combination is a single language.
(Most likely by people who don't use either one.) It might be
interesting to have Fortran++, or whatever one might want to
call a new language. Even so, one can write object oriented
programs even in Fortran 66. (There is a graphics package that
I used in the Fortran 66 days that was object oriented, though I
hadn't known that at the time.) You can also write (mostly)
non-object-oriented code in C++ or Java. (You likely need some
object-oriented I/O for Java, but the rest of a program could be
pretty much non-OO.)

> Instead, some
> object oriented features were added into the Fortran language in a way
> that fit with the language. To me, the difference was fundamental. The
> f2003 language is not designed around object orientation. You can
> program in f2003 without caring a thing about object orientation unless
> you happen to want to.

And, it seems, some parallel programming features were also added
along the way. As with OO, adding some features does not obviate
the demand for object-oriented languages. It seems that both
languages designed around, and accommodating, parallel programming
could be useful. In the case of shared-memory machines, one can
almost program sequentially, with some operations completing
faster than they otherwise would. It isn't so easy in the
case of message passing, where minimizing the amount of data
to be transferred is very important. (Insert reference to
Amdahl's law here.)
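
Filling in that reference: the standard statement of Amdahl's law is
that if a fraction p of the work can be parallelized over N
processors, the speedup is bounded by

```latex
S(N) = \frac{1}{(1 - p) + \frac{p}{N}},
\qquad
\lim_{N \to \infty} S(N) = \frac{1}{1 - p}
```

so the serial fraction (which, in the message-passing case, includes
time spent on communication) puts a hard ceiling on what more
processors can buy you.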

In any case, I just wanted to bring it into the discussion.
People can decide for their own problems what the best
solution is.

-- glen