From: Terje Mathisen "terje.mathisen at on
Andrew Reilly wrote:
> On Thu, 11 Mar 2010 07:38:44 -0600, Del Cecchi` wrote:
>
>> Apparently FFT doesn't let you fake bandwidth or latency.
>
> It depends on how they're written, of course, but FFTs don't necessarily
> care about latency at all: the access/communications pattern might be
> total and intricate, but it is entirely deterministic. Back in the 80's
> my boss made an FFT engine that the CSIRO (and later SETI) used for radio
> astronomy. It used DRAM for all storage, but the compute unit was 100%
> saturated, because the computation program and the memory access program
> were effectively pre-computed (unrolled) and scheduled around the DRAM
> latency and then stored in a ROM (and later that ROM was optimized/
> compressed into a state machine.)
>
> I dare say that the FFT routines that run on the big, distributed supers
> operate in much the same way, or at least they could.

AFAIK the best-known FFT library, FFTW, runs in exactly this manner, with
a big optimization stage between the pattern pre-compute and the final
schedule.

Smaller FFTs end up totally unrolled, while the big ones consist of a
smaller number of near-optimally scheduled loops.

I believe FFTW also does things like trying multiple alternative
calculation schemes in order to find the best one for the current
hardware.
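
FWIW, that shows up right in FFTW's standard C planner API. A minimal
sketch (the transform size and input data are arbitrary placeholders):
the FFTW_MEASURE flag is what asks the planner to actually time several
candidate decompositions and keep the fastest one for the machine at hand.

#include <fftw3.h>   /* link with -lfftw3 -lm */

int main(void)
{
    const int N = 1024;               /* transform size, arbitrary here */
    fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * N);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * N);

    /* FFTW_MEASURE (unlike FFTW_ESTIMATE) runs and times several
       candidate decompositions of the transform and keeps the fastest
       one for this particular machine. */
    fftw_plan p = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_MEASURE);

    for (int i = 0; i < N; i++) {     /* some input data */
        in[i][0] = (double)i;
        in[i][1] = 0.0;
    }

    fftw_execute(p);                  /* the plan can be reused many times */

    fftw_destroy_plan(p);
    fftw_free(in);
    fftw_free(out);
    return 0;
}

Note that FFTW_MEASURE may clobber the arrays while it experiments, which
is why the input is filled in only after the plan has been created.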

Terje
--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
From: "Andy "Krazy" Glew" on
Robert Myers wrote:
> On Mar 11, 10:01 am, "Andy \"Krazy\" Glew" <ag-n...(a)patten-glew.net>

>> This is causing me to wonder: are there any important computations that are still latency sensitive? Or is everything
>> bandwidth sensitive from now on?
>>
> Some operations research calculations are inherently serial and
> therefore latency sensitive. My argument has been that if such
> calculations were all *that* important, you'd see a big market for
> computers with heroic cooling.
>
> If even someone on Wall Street is doing it to gain a few
> milliseconds, I've not heard of it.

I am not sure that I parse this statement, but

I am fairly confident that there are people on Wall Street who are quite latency sensitive - who resort to things such
as (a) locating their servers close to the machines of other companies and the markets they talk to, so that they can
reduce the network latency in getting data between their various machines, performing calculations, and placing orders,
and (b) funding research in low-overhead networking in Linux to gain a competitive advantage.

However, they seem to be worrying about latency at the level of network overhead. Not necessarily CPU overhead.

One reason is that these companies are pasting together applications from several different services accessed across
networks, on machines in several different protection domains. They are not running monolithic applications in a single
protection domain. E.g. machine 1 reads data from several different market sources, sends it to machine 2 to run it
through a model that machine 1's owners can use but are not allowed to have on their own machines, which sends data
back to machine 1, which talks to machine 3, ... which places an order.

Another is that the computations themselves are quite throughput oriented.

(I often talk about the ratio of throughput to latency, e.g. the number of flops of computation you do at each node as
you traverse a linked data structure.)
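
A hypothetical illustration of that ratio (the node layout and the
WORK_PER_NODE constant are invented for this sketch, not taken from any
real code): a pointer chase in C where each hop pays a dependent memory
latency, and the flops done per node decide whether the loop ends up
latency-bound or throughput-bound.

#include <stddef.h>

/* Hypothetical node: one pointer to chase, a little data to crunch. */
struct node {
    double       value;
    struct node *next;
};

/* Flops per node: raise this and the traversal shifts from being bound
   by the pointer-chasing latency to being bound by arithmetic
   throughput. */
#define WORK_PER_NODE 8

double traverse(const struct node *head)
{
    double acc = 0.0;
    for (const struct node *n = head; n != NULL; n = n->next) {
        double x = n->value;
        for (int i = 0; i < WORK_PER_NODE; i++)   /* the throughput part */
            x = x * 1.0000001 + 0.5;
        acc += x;
        /* The latency part: the next iteration cannot begin until the
           load of n->next has come back from memory. */
    }
    return acc;
}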
From: Robert Myers on
On Mar 14, 8:53 pm, "Andy \"Krazy\" Glew" <ag-n...(a)patten-glew.net>
wrote:
> Robert Myers wrote:
> > On Mar 11, 10:01 am, "Andy \"Krazy\" Glew" <ag-n...(a)patten-glew.net>
> >> This is causing me to wonder: are there any important computations that are still latency sensitive?  Or is everything
> >> bandwidth sensitive from now on?
>
> > Some operations research calculations are inherently serial and
> > therefore latency sensitive.  My argument has been that if such
> > calculations were all *that* important, you'd see a big market for
> > computers with heroic cooling.
>
> > If even someone on Wall Street is doing it to gain a few
> > milliseconds, I've not heard of it.
>
> I am not sure that I parse this statement, but
>

If there are calculations that need faster clocks and the computation
can't benefit from parallelism, and if the calculation has enough
value, then you should be able to justify routine overclocking with
some form of heroic cooling. If anyone is doing it now other than as
a stunt, I'm not aware of it.

There are lengthy calculations that are inherently serial. Someone,
somewhere must be selling into that market. If so, I don't know who
they are, for anything other than gaming.

Robert.

From: Bernd Paysan on
Ken Hagan wrote:
> By the
> cunning use of a test ban treaty, we will arrive at the end of the
> present century with all sides completely confident that no-one else
> has working weapons, but no-one ever had to pass through a period
> where one side knew that they'd given up theirs but another side still
> had some.

Why do you think you need to test a nuclear bomb to see whether it
explodes? The first few bombs made exploded as predicted, and all that
was available to calculate them was some pencil and paper - the
electronic computers of that time were used to crack encryption (the
Manhattan project had quite a number of computers. Most of them were
female, as they were better at summing up tons of numbers following
simple rules - the term "computer" back then referred to a person doing
calculations by hand ;-).

So if you are really worried about whether your bombs work, just make
them big enough - this stuff is too dangerous not to explode ;-).

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: William Clodius on
Bernd Paysan <bernd.paysan(a)gmx.de> wrote:

> <snip>
> Why do you think you need to test a nuclear bomb to see whether it explodes? The
> first few bombs made exploded as predicted, and all that was available
> to calculate them was some pencil and paper
They also used quite a few mechanical calculators and IBM punched-card
machines.
<http://www.lanl.gov/history/atomicbomb/computers.shtml>

>- the electronic computers
> of that time were used to crack encryption
The first non-programmable US machine (Atanasoff-Berry) solved linear
equations. The first programmable US machine (ENIAC) was initially
intended for ballistics tables, but was almost immediately diverted to
implosion calculations. While that use came after the initial versions
of the Fat Man design, I suspect every implosion and fusion design in
the US stockpiles since about 1947 has relied on such calculations.

The Colossus machines for breaking encryption came slightly after
Atanasoff-Berry, but preceded ENIAC. Electro-mechanical systems
comparable in capability to the electronic systems (Z3 and Harvard
Mark I) were used for trajectory, hydrodynamic, and aerodynamic
calculations.

> (the Manhatten project had
> quite a number of computers. Most of them were female, as they were
> better with summing up tons of numbers following simple rules - the term
> "computer" back then referred to a person doing calculations by hand
> ;-).
>
> So if you are really worried if your bombs work, just make them big
> enough - this stuff is too dangerous to not explode ;-).
On this I won't comment.

--
Bill Clodius
los the lost and net the pet to email