From: jacko on
> If we are only limited by physics, a lot is possible...

At least one hopes so.

> Can you summarize the problem space here?
> 1) Amount of data - fixed (SPEC), or grows with performance (TPC)
> 2) Style of access - you mentioned this some, regular (not random) but not
> really suitable for sequential (or cache line) structures.  Is it sparse
> array?  Linked lists?  What percentage is pointers vs. FMAC inputs?
> 3) How branchy is it?

1) As an example, finite element modeling of large systems. 4D => x, y,
z, t. Say heat-flow patterns in a multilayer silicon device.
2) Start with something like a 2D matrix determinant or inversion.
Often sparse. Linked lists are better at sparse data. Very few pointers,
mainly double floats (see the sketch below).
3) Surprisingly not that branchy, but very loopy.
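
To make the access pattern concrete, the hot kernel in the sparse case
looks roughly like a compressed-sparse-row (CSR) matrix-vector product.
A minimal illustrative sketch in C, not code from any actual run:

#include <stddef.h>

/* CSR storage: row_ptr has nrows+1 entries giving where each row's
   nonzeros start in col[] and val[]. */
typedef struct {
    size_t nrows;
    const size_t *row_ptr;
    const size_t *col;   /* column index of each nonzero */
    const double *val;   /* nonzero values, double floats */
} csr_matrix;

/* y = A*x */
void csr_spmv(const csr_matrix *A, const double *x, double *y)
{
    for (size_t i = 0; i < A->nrows; ++i) {          /* loop over rows */
        double acc = 0.0;
        for (size_t k = A->row_ptr[i]; k < A->row_ptr[i + 1]; ++k)
            acc += A->val[k] * x[A->col[k]];          /* gather + FMA */
        y[i] = acc;
    }
}

Per nonzero that is one integer index, one gathered double load from x,
and one multiply-add; the only branches are the loop tests. That is why
it comes out very loopy but not very branchy.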

Cheers Jacko
From: Robert Myers on
On Jul 21, 10:58 am, "David L. Craig" <dlc....(a)gmail.com> wrote:
> On Jul 20, 7:11 pm, Robert Myers <rbmyers...(a)gmail.com> wrote:
>
> > Maybe quantum entanglement is the answer to moving data around.
>
> Sigh...  I wonder how many decades we are from that being standard in
> COTS hardware (assuming the global underpins of R&D hold up that
> long).  Probably more than I've got (unless the medical R&D also grows
> by leaps and bounds and society deems me worthy of being kept around).
>
> I like simultaneous backup 180 degrees around the planet and on the
> Moon, that's for sure.

There are several boundaries to this problem, which is a mixture of
electrical engineering, applied physics, computer architecture, device
electronics, and computational mathematics (and some fields I've
probably left out).

The applied physics boundary is, without a doubt, both the most
potentially interesting and the leakiest with respect to wild
speculation.

The actual history of the transistor extends to well before my
lifetime, and the full history of the transition from flaky and
limited understanding to commodity devices is instructive.

I don't intend to pursue the applied physics boundary myself because I
don't really know enough even to engage in knowledgeable speculation,
but, if you intend to go outside the box, you always run the risk of
intellectual trajectories that head off into the cosmos, never to
return. Since most of the people who have expressed an interest so
far are considerably more down to earth than that, I don't expect the
conversation to run much risk of becoming dominated by possibilities
that no one currently knows how to manufacture, but I'm willing to run
the risk.

Robert.
From: jacko on
On 21 July, 18:17, Robert Myers <rbmyers...(a)gmail.com> wrote:
> On Jul 21, 10:58 am, "David L. Craig" <dlc....(a)gmail.com> wrote:
>
> > On Jul 20, 7:11 pm, Robert Myers <rbmyers...(a)gmail.com> wrote:
>
> > > Maybe quantum entanglement is the answer to moving data around.
>
> > Sigh...  I wonder how many decades we are from that being standard in
> > COTS hardware (assuming the global underpins of R&D hold up that
> > long).  Probably more than I've got (unless the medical R&D also grows
> > by leaps and bounds and society deems me worthy of being kept around).

> I don't intend to pursue the applied physics boundary myself because I
> don't really know enough even to engage in knowledgeable speculation,
> but, if you intend to go outside the box, you always run the risk of
> intellectual trajectories that head off into the cosmos,  never to
> return.  Since most of the people who have expressed an interest so
> far are considerably more down to earth than that, I don't expect the
> conversation to run much risk of becoming dominated by possibilities
> that no one currently knows how to manufacture, but I'm willing to run
> the risk.

And I thought you were going to make the quantum memory joke ... Well,
it allows us to store all values in all addresses, and it performs all
possible calculations in under one clock cycle.
From: Robert Myers on
jacko wrote:
>> If we are only limited by physics, a lot is possible...
>
> At least one hopes so.
>
>> Can you summarize the problem space here?
>> 1) Amount of data - fixed (SPEC), or grows with performance (TPC)
>> 2) Style of access - you mentioned this some, regular (not random) but not
>> really suitable for sequential (or cache line) structures. Is it sparse
>> array? Linked lists? What percentage is pointers vs. FMAC inputs?
>> 3) How branchy is it?
>
> 1) As an example, finite element modeling of large systems. 4D => x, y,
> z, t. Say heat-flow patterns in a multilayer silicon device.
> 2) Start with something like a 2D matrix determinant or inversion.
> Often sparse. Linked lists are better at sparse data. Very few pointers,
> mainly double floats.
> 3) Surprisingly not that branchy, but very loopy.
>

That's an example of the kind of problem that (in my perception) has
come to dominate computational physics and that, at the macro level (a
warehouse full of processors with some kind of wires connecting them),
is reasonably well-served by current "supercomputers."

There is still plenty left to be done in what I would call
boundary-dominated computational physics, but, in the problems I'm
concerned about, I doubt very much if the free (boundary-free) field is
being calculated correctly. It would be as if you were trying to do
scattering in a Born approximation and didn't even get the zeroth term
(the incident plane wave) right.
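
To spell out the analogy (this is just the textbook Born series,
nothing specific to my problem): the scattered field is built as
corrections on top of the incident plane wave,

  \psi(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}
    + \int G_0(\mathbf{r},\mathbf{r}')\,V(\mathbf{r}')\,
        e^{i\mathbf{k}\cdot\mathbf{r}'}\,d^3r' + \cdots

where the first term is the zeroth-order, free-field piece and G_0 is
the free-space Green's function. Get that zeroth term wrong and every
higher-order correction is built on the error.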

complex geometries -> linked lists, sparse, irregular matrices.

non-linear free field -> dense matrices that lend themselves to clever
manipulation and that, in many cases, can be diagonalized at relatively
low cost.

The actual problem -> accurate representation of a nonlinear free field
+ non-trivial geometry == bureaucrats apparently prefer to pretend that
the problem doesn't exist, or at least not to scrutinize too closely
what's behind the plausible-looking pictures that come out.
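
On the "diagonalized at relatively low cost" point: for the linear,
constant-coefficient part of a free field, the reason is simply that
plane waves are eigenfunctions of constant-coefficient operators,

  \nabla^2 e^{i\mathbf{k}\cdot\mathbf{x}} =
    -|\mathbf{k}|^2\, e^{i\mathbf{k}\cdot\mathbf{x}},

so in a Fourier basis the operator is diagonal, and a solve is a
forward transform, a pointwise divide, and an inverse transform,
roughly O(N log N) with an FFT rather than a general dense
factorization.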

Robert.
From: jacko on
On 21 July, 19:13, Robert Myers <rbmyers...(a)gmail.com> wrote:
> jacko wrote:
> >> If we are only limited by physics, a lot is possible...
>
> > At least one hopes so.
>
> >> Can you summarize the problem space here?
> >> 1) Amount of data - fixed (SPEC), or grows with performance (TPC)
> >> 2) Style of access - you mentioned this some, regular (not random) but not
> >> really suitable for sequential (or cache line) structures.  Is it sparse
> >> array?  Linked lists?  What percentage is pointers vs. FMAC inputs?
> >> 3) How branchy is it?
>
> > 1) As an example, finite element modeling of large systems. 4D => x, y,
> > z, t. Say heat-flow patterns in a multilayer silicon device.
> > 2) Start with something like a 2D matrix determinant or inversion.
> > Often sparse. Linked lists are better at sparse data. Very few pointers,
> > mainly double floats.
> > 3) Surprisingly not that branchy, but very loopy.
>
> That's an example of the kind of problem that (in my perception) has
> come to dominate computational physics and, at the macro level (a
> warehouse full of processors with some kind of wires connecting them) is
> reasonably well-served by current "supercomputers."
>
> There is still plenty left to be done in what I would call
> boundary-dominated computational physics, but, in the problems I'm
> concerned about, I doubt very much if the free (boundary-free) field is
> being calculated correctly.  It would be as if you were trying to do
> scattering in a Born approximation and didn't even get the zeroth term
> (the incident plane wave) right.
>
> complex geometries -> linked lists, sparse, irregular matrices.
>
> non-linear free field -> dense matrices that lend themselves to clever
> manipulation and that, in many cases, can be diagonalized at relatively
> low cost.
>
> The actual problem -> accurate representation of a nonlinear free field
> + non-trivial geometry == bureaucrats apparently prefer to pretend that
> the problem doesn't exist, or at least not to scrutinize too closely
> what's behind the plausible-looking pictures that come out.
>
> Robert.

Umm, I think a need for up to cubic fields is reasonable in modelling.
Certain effects do not show up in the quadratic or linear
approximations. This can be done by tripling the variable count, and a
lot more computation, but surely there must be other ways.

Quartic modelling may not serve much of an extra purpose, as a cusp
catastrophe is already within the cubic. Mapping the field to x, and
performing an inverse map to find the applied force, can linearize
certain problems.
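
For reference (standard catastrophe theory, nothing new here), the cusp
is the quartic-potential catastrophe, so its force law is already
cubic:

  V(x) = \tfrac{1}{4}x^4 + \tfrac{a}{2}x^2 + b\,x, \qquad
  V'(x) = x^3 + a\,x + b,

and the fold of the equilibrium surface x^3 + a x + b = 0 over the
(a, b) control plane is the cusp itself. That is why cubic force terms
already capture it and a quartic term adds little that is qualitatively
new.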