From: nmm1 on
In article <1jlknxk.1y87czkrrc4okN%see(a)sig.for.address>,
Victor Eijkhout <see(a)sig.for.address> wrote:
>Colin Watters <boss(a)qomputing.com> wrote:
>
>> OpenMP's single process means little or no communications overhead.
>
>OpenMP has thread overhead. Not to mention core affinity problems. The
>communications in MPI can (often) be hidden behind computations.

It's wrong at a more basic level, too. There is a large (and
sometimes VERY large) communications overhead in OpenMP when
transferring data between threads. On some systems, it can
actually be larger than the overhead of an MPI message, though
I have not personally measured that effect. The problem is, of
course, the cache hierarchy.
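
A toy illustration of the mechanism (the example is mine, and
illustrative only): every update to a shared variable drags its
cache line from core to core, so the "communication" is performed
by the coherence hardware whether or not the programmer thinks of
it as a message.

program ping_pong
  implicit none
  integer :: i, counter
  counter = 0
!$omp parallel do shared(counter)
  do i = 1, 10000000
!$omp atomic
     counter = counter + 1   ! each update pulls the cache line here
  end do
!$omp end parallel do
  print *, 'counter =', counter
end program ping_pong

Replacing the atomic update with a reduction(+:counter) clause
removes almost all of that traffic - which is precisely the
cache-hierarchy effect at issue.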


Regards,
Nick Maclaren.
From: sturlamolden on
On 13 Jul, 21:30, n...(a)cam.ac.uk wrote:

> It's wrong at a more basic level, too.  There is a large (and
> sometimes VERY large) communications overhead in OpenMP when
> transferring data between threads.  On some systems, it can
> actually be larger than the overhead of an MPI message, though
> I have not personally measured that effect.  The problem is, of
> course, the cache hierarchy.

That may very well be true. I also have the impression that multiple
processes tend to perform better than threads. Threads have problems
due to cache synchronization between processors, false sharing, etc.
While threads share the same virtual memory space, they still have to
communicate if they run on different processors. This seems to be
generally forgotten: many programmers believe that because threads
share memory, they need only synchronization and no other
communication.
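
A small example of the false-sharing case (a hypothetical sketch,
just to show the shape of the problem): each thread updates only its
own element of a small array, so there is no race and nothing is
logically shared, yet the elements sit in the same cache line and
that line bounces between processors on every write.

program false_share
  implicit none
  integer, parameter :: nthreads = 4, reps = 10000000
  integer :: partial(nthreads)  ! adjacent elements: one cache line
  integer :: t, i
  partial = 0
!$omp parallel do private(i)
  do t = 1, nthreads
     do i = 1, reps
        partial(t) = partial(t) + 1  ! correct, but heavy coherence traffic
     end do
  end do
!$omp end parallel do
  print *, sum(partial)
end program false_share

Padding the array so that each element lives in its own cache line
(e.g. declaring partial(16, nthreads) and updating partial(1,t))
makes this hidden communication disappear.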

Also, a human who knows the intent can be better at figuring out what
needs to be communicated. Programs using MPI often actively try to
amortize the communication overhead using information available to the
programmer. With OpenMP the programmer can do very little to control
the amount of communication, except trying to avoid sharing arrays
between threads. But there the advantage of MPI ends.
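
As a sketch of what I mean by amortizing (names and sizes are made
up): an MPI code will typically pack a whole boundary into one
message rather than send the values one at a time, because one
message costs one latency:

program halo_exchange
  use mpi
  implicit none
  integer, parameter :: n = 1000
  real(8) :: halo(n)
  integer :: rank, ierr, status(MPI_STATUS_SIZE)
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  if (rank == 0) then
     halo = 1.0d0                        ! pack the whole boundary first
     call MPI_Send(halo, n, MPI_DOUBLE_PRECISION, 1, 0, &
                   MPI_COMM_WORLD, ierr) ! one message, one latency
  else if (rank == 1) then
     call MPI_Recv(halo, n, MPI_DOUBLE_PRECISION, 0, 0, &
                   MPI_COMM_WORLD, status, ierr)
  end if
  call MPI_Finalize(ierr)
end program halo_exchange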

From my point of view, programming with OpenMP is a substantially
lighter burden. The code is written as usual, debugged, verified, and
then !$omp comments are added to aid the compiler. So OpenMP moves a
lot of the nitty-gritty details to the compiler. And while OpenMP is
just comments that can be ignored, code written for MPI will depend
on MPI's communication API, such as MPI_Send, MPI_Recv, etc. That
speaks strongly in favour of OpenMP. Yes, we might lose performance
compared to MPI; but coding is easier; there is less room for
mistakes; less room for the deadlocks, livelocks and annoyances that
plague MPI development; bugs are easier to squash; and it just feels
right. Against that, there are e.g. issues with OpenMP on clusters.
OpenMP uses a "shared memory" model, which is harder to implement on
a cluster architecture. But it has been done too.
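
The workflow I mean looks like this (a trivial sketch of my own):
the loop is plain Fortran and can be written and debugged serially;
to a compiler that does not know OpenMP the !$omp lines are mere
comments, so the very same source still builds and runs unchanged:

program axpy
  implicit none
  integer, parameter :: n = 1000000
  real(8) :: x(n), y(n)
  integer :: i
  x = 1.0d0
  y = 2.0d0
!$omp parallel do            ! ignored entirely by a non-OpenMP compiler
  do i = 1, n
     y(i) = y(i) + 3.0d0*x(i)
  end do
!$omp end parallel do
  print *, y(1), y(n)
end program axpy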

From: Victor Eijkhout on
sturlamolden <sturlamolden(a)yahoo.no> wrote:

> OpenMP uses a
> "shared memory" model, which is harder to implement on a cluster
> architecture. But it has been done too.

What are you thinking of?

Victor.
--
Victor Eijkhout -- eijkhout at tacc utexas edu
From: nmm1 on
In article <9d546251-449e-41b2-b55a-5144515cef07(a)q22g2000yqm.googlegroups.com>,
sturlamolden <sturlamolden(a)yahoo.no> wrote:
>
>From my point of view, programming with OpenMP is a substantially
>lighter burden. The code is written as usual, debugged, verified, and
>then !$omp comments are added to aid the compiler. So OpenMP moves a
>lot of the nitty-gritty details to the compiler. And while OpenMP is
>just comments that can be ignored, code written for MPI will depend
>on MPI's communication API, such as MPI_Send, MPI_Recv, etc. That
>speaks strongly in favour of OpenMP.

That's only true if you use a very restricted subset of OpenMP. As
soon as you use most of its more advanced features, it's no
different from MPI.

And there's a worse problem - you need to handle many of the fancier
features of C/C++ and even Fortran specially for OpenMP, even more
so than for MPI. I/O to or from the standard units is the main
example, but anything with thread-specific state (e.g. IEEE 754
flags) is similar. And, with OpenMP, you don't always control
which thread is used to execute code!
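
A sketch of that last point (the example is mine): the usual
workaround is to funnel output through a single construct, but
OpenMP only promises that *some* thread executes it - which is fatal
for anything that relies on thread-specific state:

program io_guard
  use omp_lib
  implicit none
!$omp parallel
  ! printing from every thread here would interleave unpredictably
!$omp single
  ! exactly one thread executes this block, but WHICH one is
  ! unspecified, so per-thread state (IEEE 754 flags, private I/O
  ! state) cannot be relied upon
  print *, 'report from thread', omp_get_thread_num()
!$omp end single
!$omp end parallel
end program io_guard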

>Yes, we might lose performance
>compared to MPI; but coding is easier; there is less room for
>mistakes; less room for the deadlocks, livelocks and annoyances that
>plague MPI development; bugs are easier to squash; and it just feels
>right.

Er, no, sorry. There is MORE room for mistakes, deadlocks, livelocks
and race conditions. Most of the people I know of who have tried
OpenMP have hit a failure caused by those, completely failed to
locate it, and gone back to MPI because it's easier. You either
have a very simple task, or are doing very well.
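
A typical specimen (hypothetical, but of the shape I mean): the
scratch variable t below is shared by default, so threads overwrite
each other's value between the two statements, and the loop silently
produces wrong answers on some runs:

program hidden_race
  implicit none
  integer, parameter :: n = 100000
  real(8) :: a(n), b(n), t
  integer :: i
  b = [(real(i,8), i = 1, n)]
!$omp parallel do            ! BUG: t should be declared private(t)
  do i = 1, n
     t = 2.0d0*b(i)          ! another thread may change t right here...
     a(i) = t + 1.0d0        ! ...so a(i) can be built from the wrong t
  end do
!$omp end parallel do
  print *, a(1), a(n)
end program hidden_race

The serial code is correct and the directive looks innocent, yet
nothing fails deterministically - which is exactly why such bugs are
so hard to locate.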

And that's in Fortran - no sane person wants to get me started on
the (usually undefined) interactions between C/C++ and OpenMP.



Regards,
Nick Maclaren.
From: nmm1 on
In article <1jlm3dl.110jatrru0ngcN%see(a)sig.for.address>,
Victor Eijkhout <see(a)sig.for.address> wrote:
>sturlamolden <sturlamolden(a)yahoo.no> wrote:
>
>> OpenMP uses a
>> "shared memory" model, which is harder to implement on a cluster
>> architecture. But it has been done too.
>
>What are you thinking of?

Intel Cluster Tools.


Regards,
Nick Maclaren.