From: already5chosen on

Andrew Reilly wrote:
> On Mon, 02 Oct 2006 17:17:23 -0700, already5chosen wrote:
>
> > FPGA development - synthesis, place and route, timing
> > analysis. All these tasks are 100% CPU-bound and single-threaded.
>
> Are those tasks inherently single-threaded, or is that just the way your
> tools vendor coded them? I would have expected synthesis to have about
> the same opportunities for parallelism as other compilers: essentially
> what parallel make can give you. Place and route might be parallelisable,
> if they operate in an iterative try-multiple options minimisation style.
> Don't know about timing analysis. There's lots of independent stuff going
> on in most FPGAs, though, so I'd think that there's ample opportunity to
> do that in parallel too.
>


I am stating the facts; I don't pretend to know the reasons. You can
post your questions on comp.arch.fpga.
BTW, HDL synthesis is not similar to a normal software build. It is
similar to a software build with link-time code generation plus
interprocedural optimization.
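To illustrate the difference, a toy sketch (nothing like a real tool
chain; the file names and phases are made up): per-unit compilation is
an independent map over sources, but a link-time code generator has to
see every unit at once, so that stage stays one serial job no matter
how many cores are available.

    from multiprocessing import Pool

    def compile_unit(src):
        # Per-file "compilation": each unit is independent, so this
        # phase parallelises trivially (parallel-make territory).
        return "ir(" + src + ")"

    def link_time_codegen(irs):
        # Whole-program step: needs the IR of every unit at once,
        # so it is one serial job regardless of core count.
        return "binary[" + "+".join(irs) + "]"

    if __name__ == "__main__":
        sources = ["a.v", "b.v", "c.v", "d.v"]     # hypothetical files
        with Pool() as pool:
            irs = pool.map(compile_unit, sources)  # parallel phase
        print(link_time_codegen(irs))              # serial phase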

> Multi-processors have been available in the higher-end CAD workstation
> arena for a long time. I would have thought that the code would be using
> them, by now.
>
> Cheers,
>
> --
> Andrew

Maybe in other CAD/CAE areas, but not for FPGA development.

From: already5chosen on

Del Cecchi wrote:
>
> Well, if you count work applications there are many. SPICE. DRC/LVS,
> extraction, simulation.......
>
> That's why server farms were invented.
>
> del
>

How many of those are both
1. efficiently parallelizable, and
2. not embarrassingly parallel?
Because in the embarrassingly-parallel case multicore is no better
than SMP, except in price, and both multicore and SMP are often no
better than distributed computation (clusters, MPP).
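To be clear about terms, by "embarrassingly parallel" I mean workloads
like the toy sketch below, where the tasks share nothing and never
communicate - which is exactly why a cluster of cheap boxes handles
them as well as one expensive multicore or SMP machine.

    from concurrent.futures import ProcessPoolExecutor

    def simulate(seed):
        # Completely independent task: no shared state, no messages.
        # A toy linear-congruential loop standing in for a real
        # per-seed job such as a Monte Carlo run.
        x = seed
        for _ in range(100_000):
            x = (1103515245 * x + 12345) % (2 ** 31)
        return x

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(simulate, range(100)))
        print(len(results), "independent results")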

From: jsavard on
Jon Forrest wrote:
> But, if this too doesn't do much to improve performance, what's
> the point? Remember the Myth of Sisyphus?

It depends on the application. The benefit of throwing more processors
at an application can range from none whatsoever to a speedup linear
in the number of processors used.
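One way to put numbers on that range (my formulation, not anything Jon
wrote) is Amdahl's law: if a fraction s of the work is inherently
serial, N processors give a speedup of 1/(s + (1-s)/N), which runs
from 1 (no benefit at all, s = 1) up to N (linear, s = 0).

    def amdahl_speedup(s, n):
        # Amdahl's law: only the parallel fraction (1 - s) of the
        # work benefits from the extra processors.
        return 1.0 / (s + (1.0 - s) / n)

    for s in (0.0, 0.05, 0.5, 1.0):
        print("s=%.2f:" % s,
              [round(amdahl_speedup(s, n), 1) for n in (1, 4, 16, 64)])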

Multi-core *does* have a ceiling too, determined by chip yield. After
that, though, you can put more processor chips on the board. (Microsoft
licensing policies happen to be warping chip design at the moment, it
might be noted.)

The idea is to do as much as is reasonable in each direction - and
going multicore is one of those directions. Putting memory on the chip
to ease the bandwidth bottleneck is another possibility.

John Savard

From: jsavard on
Eugene Miya wrote:
> In article <efgr7e$6oa$1(a)gemini.csx.cam.ac.uk>,
> Nick Maclaren <nmm1(a)cus.cam.ac.uk> wrote:

> >When are we going to see them, then?

> We? "What do you mean 'we?' white man?" --Tonto
> I've seen them. I'm under an NDA.

That doesn't count.

Of course, it still means they *exist*. But when the world is going to
see them is a legitimate question.

John Savard

From: Nick Maclaren on

In article <pan.2006.10.02.23.58.47.886220(a)areilly.bpc-users.org>,
Andrew Reilly <andrew-newspost(a)areilly.bpc-users.org> writes:
|> On Mon, 02 Oct 2006 17:17:23 -0700, already5chosen wrote:
|>
|> > FPGA development - synthesis, place and route, timing
|> > analysis. All these tasks are 100% CPU-bound and single-threaded.
|>
|> Are those tasks inherently single-threaded, or is that just the way your
|> tools vendor coded them? I would have expected synthesis to have about
|> the same opportunities for parallelism as other compilers: essentially
|> what parallel make can give you. Place and route might be parallelisable,
|> if they operate in an iterative try-multiple options minimisation style.

The early 1970s experience has never been contradicted: for many or
most applications, parallelising at that level gives only a small and
very inefficient use of parallelism (e.g. the efficiency is often only
log(N)/N, where N is the number of CPUs). That experience debunked the
claims of the functional programming brigade that such methodology
gave automatic parallelisation.
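To put figures on that (my arithmetic, assuming log base 2): an
efficiency of log(N)/N means the speedup is only log(N), so it grows
logarithmically while the hardware bill grows linearly.

    from math import log2

    for n in (2, 4, 8, 16, 32, 64):
        eff = log2(n) / n     # the rule-of-thumb efficiency
        speedup = n * eff     # = log2(n)
        print("N=%3d  speedup=%4.1f  efficiency=%5.1f%%"
              % (n, speedup, 100 * eff))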

I should be interested to hear of any real experiments where parallel
make gives a better result, and to look at the make structure. My
experience and that of many other people is that it fits the above
model to a T.
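For anyone who does run such an experiment, the bound to check is the
total work divided by the critical path of the dependency graph -
parallel make can never beat that, however many CPUs it is given. A
toy sketch, with a hypothetical four-target project:

    from functools import lru_cache

    # Hypothetical makefile: deps maps each target to its
    # prerequisites, cost to its build time in minutes.
    deps = {"app": ["lib1", "lib2"], "lib1": ["gen"],
            "lib2": ["gen"], "gen": []}
    cost = {"app": 5, "lib1": 3, "lib2": 4, "gen": 2}

    @lru_cache(maxsize=None)
    def finish(target):
        # Earliest finish time with unlimited CPUs: a target's own
        # cost after all of its prerequisites have finished.
        return cost[target] + max((finish(d) for d in deps[target]),
                                  default=0)

    work = sum(cost.values())   # serial build: 14 minutes
    span = finish("app")        # critical path: 2 + 4 + 5 = 11
    print("max speedup = %d/%d = %.2f" % (work, span, work / span))

Even with unlimited processors the speedup there is 14/11, i.e. under
1.3, which is exactly the sort of result I mean.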

40 years' experience in this area can be summed up as TANSTAAFL (there
ain't no such thing as a free lunch).

|> Multi-processors have been available in the higher-end CAD workstation
|> arena for a long time. I would have thought that the code would be using
|> them, by now.

It's actually quite hard to add parallelism to programs that weren't
designed for it and aren't naturally embarrassingly parallel.


Regards,
Nick Maclaren.