From: Quadibloc on
On Apr 26, 7:03 am, Bernd Paysan <bernd.pay...(a)gmx.de> wrote:

> I remember that the HP Fortran compiler compiled a hand-optimized matrix
> multiplication whenever it found something resembling a matrix
> multiplication (more than 15 years ago), and I'm quite ok with that
> approach.

Well, I'm not. Not because it's cheating on benchmarks, but because
the compiler should only replace Fortran code with a hand-optimized
matrix multiplication routine if, in fact, what it found *really is*
a matrix multiplication.

If it replaces code that merely, in its estimation, resembles a
matrix multiplication, it may well be replacing code that was never
intended to perform a conventional matrix multiplication - hence
causing the program to produce results other than those intended.
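
To make the distinction concrete, here is a sketch in C (purely
illustrative - the HP compiler in question worked on Fortran, and this
is not its actual pattern matcher; the function names and the 64x64
size are my own). The first loop nest really is C = A*B, so
substituting a tuned routine for it is safe; the second has the same
shape but computes a (max,*) "product", so replacing it with a
conventional matrix multiply would silently change the results.

#include <stddef.h>

#define N 64

/* A genuine matrix multiplication: multiply-accumulate over k. */
void real_matmul(double c[N][N], const double a[N][N], const double b[N][N])
{
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++) {
            double sum = 0.0;
            for (size_t k = 0; k < N; k++)
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
}

/* Same loop structure, but it takes the maximum of the products
 * instead of their sum - not a conventional matrix multiplication. */
void looks_like_matmul(double c[N][N], const double a[N][N], const double b[N][N])
{
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++) {
            double best = a[i][0] * b[0][j];
            for (size_t k = 1; k < N; k++) {
                double t = a[i][k] * b[k][j];
                if (t > best)
                    best = t;
            }
            c[i][j] = best;
        }
}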

Of course, I suspect the error is in your choice of words, and not in
the design of the compiler. I hope.

John Savard
From: Morten Reistad on
In article <Z-Odnb-0T6q1GEjWnZ2dnUVZ8g-dnZ2d(a)giganews.com>,
<kenney(a)cix.compulink.co.uk> wrote:
>In article <hr2cvu$5hs$1(a)smaug.linux.pwf.cam.ac.uk>, nmm1(a)cam.ac.uk ()
>wrote:
>
>> Virtually no modern software (in compiled form) will survive
>> two changes of operating system version number, and a great
>> deal won't survive one.
>
> Well, the two programs I use most, Ameol (an offline reader) and
>ClarisWorks Five, work fine on XP. I know Ameol will work on Windows 7.
>Microsoft can be accused of a lot of things, but they did keep their
>program compatibility promises.

Microsoft has indeed delivered on the upward-compatibility promise,
way beyond what I would have expected possible a decade ago.

It also seems to have trapped them: they are still stuck with many of
the same problems that plagued W2K.

But for the rest of the systems, he is right.

And rebuilds get ever more complicated with literally hundreds
of cross-dependencies.

-- mrr

From: Robert Myers on
On Apr 25, 10:15 pm, MitchAlsup <MitchAl...(a)aol.com> wrote:

>
> Perhaps along with the notion of the "Memory Wall" and the "Power
> Wall" we have (or are about to) run into the "Multi-Processing" Wall.
> That is, we think we understand the problem of getting applications
> and their necessary data and disk structures parallel-enough and
> distributed-enough. And we remain under the impression that we are
> "expression limited" in applying our techniques to the machines that
> have been built; but in reality we are limited by something entirely
> more fundamental, something we do not yet grasp or cannot yet enumerate.
>

A misbegotten imaginary generalization of the Turing machine is at the
root of all this, along with a misbegotten conception of how
intelligence works.

One of these days, we'll recognize a Turing machine as an interesting
first step, but ultimately a dead end. Along with it, we'll
eventually grasp that the entire notion of "programming" is a very
limiting concept. Eventually, the idea of a "programmer", as we now
conceive it, will seem preposterously dated and strange.

Nature has evolved very sophisticated ways of encoding the design of
an organism that, by interacting with an environment having certain
expected characteristics, develops into a mature organism so
sophisticated that it is hard to believe it arose from such compact
code--and it didn't. It developed from that compact code through
interaction with an appropriate environment, from which it "learned."

The learning is a deck stacked to produce some outcomes and not
others. Development problems and environmental hazards of every
conceivable kind get in the way. If the organism can adapt, it
survives and perhaps even flourishes and reproduces. In the process,
the extremely compact code is subtly modified to improve performance
or to cope with new environmental challenges. Organisms that do not
survive to reproduce make no contribution to the gene pool.

One did not need Aristotle, Euclid, or Newton (and emphatically not
Turing) to get to Darwin. One does not even need to be as persnickety
about detail as most programmers fancy themselves to be. Natural
systems work out the details themselves.

One of these days, we'll figure out how to mimic that magic. So long
as universities and research labs are filled to overflowing with
people flogging a dead horse, we'll just get variations on the same
old thing.

Does any of this have to do with hardware? I think it does. So long
as processes are so limited and clumsy in the way they communicate,
we'll wind up with machines that are at best an outer product of
Turing machines. Only when the processors can communicate as much as
they want to, and whenever they want to, will we have a chance of
mimicking nature.

I can't lay out a blueprint, but I can identify some key
characteristics:

1. The ability to interact with some external reservoir of information
and feedback (e.g. scientists).

2. The ability to measure success and failure and to remember
strategies that are successful and to learn to avoid dead ends.

3. The ability to make arbitrary connections between distantly related
activities and experiences ("memories").

4. The ability to adjust behavior based on measures of success or
failure or even ad hoc heuristics.
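
As a toy illustration only (this is not Robert's blueprint; the
"strategies" and payoff numbers below are invented), abilities (2) and
(4) can be seen in miniature in an epsilon-greedy learner: it tries
strategies, measures success and failure, remembers the running payoff
of each, and shifts its behavior toward the ones that work.

#include <stdio.h>
#include <stdlib.h>

#define ARMS    4
#define TRIALS  10000
#define EPSILON 0.1                 /* fraction of the time we explore */

/* hidden from the learner: each strategy's true success probability */
static const double true_payoff[ARMS] = { 0.2, 0.5, 0.7, 0.4 };

int main(void)
{
    double value[ARMS] = { 0 };     /* remembered estimate of each strategy */
    int    pulls[ARMS] = { 0 };
    srand(1);

    for (int t = 0; t < TRIALS; t++) {
        int arm;
        if ((double)rand() / RAND_MAX < EPSILON) {
            arm = rand() % ARMS;    /* explore: try something arbitrary      */
        } else {
            arm = 0;                /* exploit: pick the best-remembered arm */
            for (int a = 1; a < ARMS; a++)
                if (value[a] > value[arm]) arm = a;
        }

        /* environment feedback: success with the arm's hidden probability */
        double reward = ((double)rand() / RAND_MAX < true_payoff[arm]) ? 1.0 : 0.0;

        /* measure success/failure and update the remembered estimate */
        pulls[arm]++;
        value[arm] += (reward - value[arm]) / pulls[arm];
    }

    for (int a = 0; a < ARMS; a++)
        printf("strategy %d: estimated payoff %.2f after %d tries\n",
               a, value[a], pulls[a]);
    return 0;
}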

I'm sure all this has something in common with agendas for artificial
intelligence and chess-playing programs, but I am proposing it as a
model for computation in general. No, if DARPA gave me even unlimited
funding, I could not promise how long it would take.

If it's not that radical blueprint, it's something just as radical.
Turing is dead, or at least on life support.

Robert.
From: "Andy "Krazy" Glew" on
On 4/25/2010 10:43 PM, Brett Davis wrote:

> ATI chips already have ~2000 processors; simple scaling over the next
> decade suggests that the monitor in your iMac a decade from now will
> have 100,000 CPUs. Which means that a desktop server will have a
> million CPUs - one for every 10 pixels on your monitor.

I agree with the gist of what you are saying, but: I am staring at circa 10 megapixels (5 monitors) right now.

I expect/hope that there will be many more pixels a decade from now. [*]

Cheap LCD monitors and USB display adapters are what have finally made me seriously consider getting a desktop
machine, after a decade of laptops and tablets.

--

[*] perhaps not physical pixels. I hope by 2020 that I may be using a contact lens display with only circa 1M physical
pixels. But the virtual framebuffer, tracking eye motion and deciding what subset to display in the contact lens, will
be much bigger.
From: Quadibloc on
On Apr 26, 11:22 am, Robert Myers <rbmyers...(a)gmail.com> wrote:

> One of these days, we'll figure out how to mimic that magic.

Well, "genetic algorithms" are already used to solve certain types of
problem.
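
For anyone who hasn't seen one, a minimal genetic algorithm fits in a
page of C. The sketch below is purely illustrative (the OneMax problem
and every parameter are my own choices, not anything from this thread):
a population of bit strings is selected by fitness, recombined, and
mutated until nearly all bits are 1.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define POP   50      /* population size       */
#define LEN   64      /* genome length in bits */
#define GENS  200     /* generations to run    */
#define PMUT  0.01    /* per-bit mutation rate */

static int fitness(const char *g)           /* count of 1 bits */
{
    int f = 0;
    for (int i = 0; i < LEN; i++) f += g[i];
    return f;
}

static int tournament(char pop[POP][LEN])   /* fitter of two random genomes */
{
    int a = rand() % POP, b = rand() % POP;
    return fitness(pop[a]) > fitness(pop[b]) ? a : b;
}

int main(void)
{
    char pop[POP][LEN], next[POP][LEN];
    srand(1);

    for (int i = 0; i < POP; i++)            /* random initial population */
        for (int j = 0; j < LEN; j++)
            pop[i][j] = rand() & 1;

    for (int g = 0; g < GENS; g++) {
        for (int i = 0; i < POP; i++) {
            int pa = tournament(pop), pb = tournament(pop);
            int cut = rand() % LEN;          /* single-point crossover */
            for (int j = 0; j < LEN; j++) {
                next[i][j] = (j < cut) ? pop[pa][j] : pop[pb][j];
                if ((double)rand() / RAND_MAX < PMUT)
                    next[i][j] ^= 1;         /* bit-flip mutation */
            }
        }
        memcpy(pop, next, sizeof pop);
    }

    int best = 0;
    for (int i = 1; i < POP; i++)
        if (fitness(pop[i]) > fitness(pop[best])) best = i;
    printf("best fitness after %d generations: %d/%d\n",
           GENS, fitness(pop[best]), LEN);
    return 0;
}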

John Savard