From: Ken Hagan on
On Sat, 06 Feb 2010 22:56:22 -0000, Bernd Paysan <bernd.paysan(a)gmx.de>
wrote:

> Obviously depends on the big O() of the problem. If your problem is
> O(n³), then you can do a thousand times more science now ;-).

I'd have thought 4 was the baseline even for unreasonably well-behaved
systems, but I'll concede that you might not have a superscript-4 on your
keyboard.
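A minimal sketch of the arithmetic behind the quip (function name and the exponents printed are illustrative, not from the thread): with S times more compute, the feasible problem size of an O(n^k) algorithm grows by a factor of S^(1/k).

```python
# Back-of-envelope for the scaling joke above: if an algorithm costs
# O(n^k), then S times more compute lets the problem size n grow by
# a factor of S**(1/k).

def problem_size_gain(speedup: float, exponent: float) -> float:
    """Factor by which n can grow given `speedup` times more compute."""
    return speedup ** (1.0 / exponent)

if __name__ == "__main__":
    for k in (3, 4):
        gain = problem_size_gain(1000, k)
        print(f"O(n^{k}): 1000x compute -> {gain:.1f}x larger n")
```

So a thousandfold speedup buys a factor of 10 in n for a cubic cost, but only about 5.6 for a quartic one, which is the whole joke.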

Whilst I'm here, I can't resist noting that the high-energy physics
theorists seem to be making life even harder for themselves. They seem to
have spent the last fifty years just adding new dimensions without much to
show for the effort. As a result, they can now only do about six and a
half times more science. (Except that *none* of their problems are
unreasonably well-behaved. Quite the opposite, as far as I can gather,
since the *other* thing they've spent half a century doing is trying to
balance divergent sums against each other in the hope of one day getting a
finite answer.)
From: Peter Dickerson on

"Ken Hagan" <K.Hagan(a)thermoteknix.com> wrote in message
news:op.u7s0i0qxss38k4(a)khagan.ttx...
> On Sat, 06 Feb 2010 22:56:22 -0000, Bernd Paysan <bernd.paysan(a)gmx.de>
> wrote:
>
>> Obviously depends on the big O() of the problem. If your problem is
>> O(n³), then you can do a thousand times more science now ;-).
>
> I'd have thought 4 was the baseline even for unreasonably well-behaved
> systems, but I'll concede that you might not have a superscript-4 on your
> keyboard.
>
> Whilst I'm here, I can't resist noting that the high-energy physics
> theorists seem to be making life even harder for themselves. They seem to
> have spent the last fifty years just adding new dimensions without much to
> show for the effort. As a result, they can now only do about six and a

I think that would be the string theorists. AFAIK none has remotely got near
to calculating anything (in the numerical sense), except that the spin of
the graviton should be 2 - and that only needs pen and paper.

> half times more science. (Except that *none* of their problems are
> unreasonably well-behaved. Quite the opposite, as far as I can gather,
> since the *other* thing they've spent half a century doing is trying to
> balance divergent sums against each other in the hope of one day getting a
> finite answer.)

Yes, this is a different set of problems. I thought that much of this had
been formally sorted out with renormalisation-group theory.

At the moment the computers might be better kept busy calculating safe
currents in superconducting magnets!

Peter


From: Eugene Miya on
In article <68d77b2d-8f18-431a-9fe1-01c94e2cdf49(a)28g2000vbf.googlegroups.com>,
Robert Myers <rbmyersusa(a)gmail.com> wrote:
>> >Martin Gardner's Annotated Alice since I was attending a well-
>> >known trade school in the Cambridge that Nick doesn't work in.

>On Feb 4, 4:33 pm, eug...(a)cse.ucsc.edu (Eugene Miya) wrote:
>> Visit the other well-known older competitor to Cambridge. That's where
>> Carroll/Dodgson made up and wrote down the story (scenes in the 2nd appear).
>>
>Having walked the same well-worn terrazzo as Wiener and Shannon
>without grasping the history at the time is sin enough for a
>lifetime. Oxford makes prime ministers. Let them have at it.
>Worshiping at Oxford is like worshiping at the little red schoolhouse
>up the river. They have enough worshipers without this factory hand.

Yes, I know, Bill Clinton didn't inhale there.
A friend teaches and does research at All Souls.

One must not forget that Edmund Halley was also once there.
It's not all about ministers.

Give them credit, they have a shark crashing through a guy's house.


>> >That's an argument I'll buy. If these "supercomputers" are really
>> >throughput machines, then it's a whole different story. I doubt very
>> >seriously, though, that the Congress realizes that it is buying the
>> >equivalent of tens of thousands of well-managed personal workstations.
>>
>> 1) Congress doesn't recognize depreciation.
>> 2) ... first Beowulf.
>> 3) Development sites differ from production throughput sites.
>>
>I'm not particularly arguing for Beowulf clusters. I've said my bit
But that is part of the problem.
>about the meaninglessness of hugeness (other than for bureaucratic and
>institutional sales and promotion) several times. If I thought more
>than a handful got it, I'd stop repeating myself. If more than a
>handful got it, the institutions that can afford these things would
>stop taking us for chumps.

Centers got on the treadmill.
Now it is time for them to run.....

>> >> >Computers have fulfilled their promise only in the area of computer
>> >> >graphics and animation.
>>
>> >> I'm not certain what promise you are thinking.
>>
>> >Everyone has their own perspective. I've described an important part
>> >of mine in a different post.
>>
>> I will repeat:
>> What promise is that? Who said it?
>>
>The original language was compelling, but, since it was a private
>conversation, I can't repeat it exactly as I remember it. What it
>came down to is that computers would eventually be able to produce
>even very complicated street scenes involving humans. We're not
>there yet, but we're getting closer all the time. The people who
>actually owned the Cray said it.

I can't read your mind.
Very little graphics is done on Crays. The people in show biz are
generally too cheap. Rendering and ray tracing are merely parts of the
scene-generation problem in the graphics pipeline.

My friends who work at Pixar on their clusters have reached the point
that a given cluster is only good enough for one film. While the machines
have not completely depreciated, they do better assembling a new cluster
for each film and hoping that its cost is recouped by the new film.

Computers are much more than graphics.

>> >I don't think anyone has enough insight to have a clear view.
>> >all we can say is that we don't even know a path.
>> "I think funding AI is stupidity."
>> One path known is simulate all the physics (down to the chemistry and
>> quanta).
>>
>That will happen just as soon as computers can do realistic Reynolds
>numbers by direct simulation, which is to say, never, without some
>unforeseeable breakthrough in technology.

Or fundamental science.

So is that reason to stop?

No.
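For scale, the quoted Reynolds-number claim rests on a standard Kolmogorov-scaling estimate (textbook figures, not from this thread): direct numerical simulation of homogeneous turbulence needs roughly Re^(9/4) grid points, and total work grows roughly as Re^3.

```python
# Rough Kolmogorov-scaling estimate for direct numerical simulation (DNS)
# of turbulence: grid points ~ Re^(9/4), total work ~ Re^3 (up to constant
# factors).  Standard textbook scaling, used here only for illustration.

def dns_grid_points(reynolds: float) -> float:
    """Approximate grid-point count for DNS at a given Reynolds number."""
    return reynolds ** 2.25

def dns_relative_work(reynolds: float) -> float:
    """Approximate total work, up to a constant factor."""
    return reynolds ** 3

if __name__ == "__main__":
    for re_num in (1e4, 1e6, 1e8):
        print(f"Re={re_num:.0e}: ~{dns_grid_points(re_num):.1e} points, "
              f"work ~ {dns_relative_work(re_num):.0e}")
```

At the Reynolds numbers of practical flows the work estimate runs to 10^18 and beyond, which is why "never, without some unforeseeable breakthrough" is not mere pessimism.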

>> >> Many fields
>> >> (e.g., cryptanalysis, machine translation, etc.)
>> >> don't have graphical products.

>I think that everyone understands the potential of brute force in
>cryptanalysis.

I doubt that.
We have the impressive feat of Joe Rochefort
who didn't know his opponent's language.

> I don't know what to think of machine translation.

That's OK, the people who attempt it don't either.
It's largely touted by monolingual analysts.
They leave all kinds of cultural things off.

Search on ALPAC report.

>Arabic language skills seem to be plenty in demand.
>Brute force doesn't *always* not work--just most of the time.

We are the 90% discipline of optimists.
In our world (Western Judeo-Christian) we persecute those who choose to
study opposing opinions/subjects.
If it appears good enough by some measure, we are doing OK.

But likely not enough if Abu Ghraib is cited as an example.
We want to have cute hand-held things. I'll skip Douglas Adams's too-cute
fish analogy.

--

Looking for an H-912 (container).

From: Robert Myers on
On Feb 8, 4:32 pm, eug...(a)cse.ucsc.edu (Eugene Miya) wrote:
> In article <68d77b2d-8f18-431a-9fe1-01c94e2cd...(a)28g2000vbf.googlegroups.com>,
> Robert Myers  <rbmyers...(a)gmail.com> wrote:
>
[Eugene Miya wrote]:
>
> Computers are much more than graphics.
>
Maybe, but I'm not sure that graphics aren't their only truly credible
product:

http://www.guardian.co.uk/technology/2010/feb/05/science-climate-emails-code-release

The fact that computers make numbers doesn't mean you should believe
them. On the other hand, I don't think that graphics that are the
result of deeply flawed computation are necessarily useless. They
help us to visualize and to imagine, even if, in the end, they aren't
very good at what they are supposedly good at, which is computing with
known accuracy.

> >> One path known is simulate all the physics (down to the chemistry and
> >> quanta).
>
> >That will happen just as soon as computers can do realistic Reynolds
> >numbers by direct simulation, which is to say, never, without some
> >unforeseeable breakthrough in technology.
>
> Or fundamental science.
>
> So is that reason to stop?
>
> No.
>
If we didn't have all the floundering in AI, we wouldn't be able to
see the profound limitations of what once were thought to be powerful
approaches. Science and technology needs its failures as well as its
successes.

On the other hand, I think it's fair to ask "Did this work out the way
we thought it would?"

Analysis of the behavior of complex systems: no
Risk assessment: no
Weather prediction: depends on how much of a dupe you ever were.
Strongly non-linear systems in general: no.
Secure management of sensitive data: no.
Artificial intelligence: no.
Astrophysics: computer-generated artist's conceptions.

None of this is reason to *stop* anything, but it is good reason to
ask if we haven't led ourselves off on a path that may well have
simply made us less smart and more credulous.

Robert.
From: Gavin Scott on
Eugene Miya <eugene(a)cse.ucsc.edu> wrote:
> My friends who work at Pixar on their clusters have reached the point
> that a given cluster is only good enough for 1 film. While they have
> not completely depreciated, they work better assembling a new cluster
> for each film and hope that input cost is recouped by the new film.

As far as I know it has always been this way and I doubt there's any
application that depreciates hardware faster. 3D has an insatiable
appetite for computing power.

Some of the numbers for Avatar are fairly impressive. 10TB average
amount of assets produced per day, 34,000 CPU cores, >100TB RAM,
47 hours (of something) per frame, 50 billion polygon scenes, etc.
(numbers from memory from the January 3D World magazine, and IIRC).
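Taking those magazine figures at face value, and assuming the ambiguous "47 hours" means core-hours per frame (pure assumption; the post itself is unsure what the unit was), a back-of-envelope:

```python
# Back-of-envelope on the Avatar numbers above.  Treating "47 hours per
# frame" as *core-hours* per frame is an assumption; the magazine figure
# may have meant something else entirely.

CORES = 34_000
CORE_HOURS_PER_FRAME = 47.0   # assumed interpretation of "47 hours"
FPS = 24                      # standard film frame rate

def frames_per_day(cores: int = CORES,
                   cost: float = CORE_HOURS_PER_FRAME) -> float:
    """Frames the whole farm can finish per 24-hour day."""
    return cores * 24 / cost

def minutes_of_footage_per_day() -> float:
    """Finished footage per day at 24 fps, in minutes."""
    return frames_per_day() / FPS / 60

if __name__ == "__main__":
    print(f"~{frames_per_day():,.0f} frames/day, "
          f"~{minutes_of_footage_per_day():.0f} minutes of footage/day")
```

Under that assumption the farm turns out on the order of ten minutes of finished footage per day, which at least makes the 34,000-core figure feel plausible.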

Maybe not so impressive relative to what Eugene's friends run in the
"fluid dynamics" area, but pretty cool nonetheless, I think. And if
they could have afforded 10x or 100x of any or all of those parameters
then they would have had no trouble putting that power to use.

The Cray era of graphics (where the machine was needed to do the actual
rendering rather than running a simulation of some sort) was VERY
short. I think only one or two were ever really sold into that market
and I think their purchasers were hoping to amortize the cost over
significantly more years than turned out to be realistic.

> Computers are much more than graphics.

Indeed, but haven't things like PC 3D gaming and 3D content creation
been responsible for a lot of the market force that has driven
innovation both in CPUs and 3D hardware, and that has now led
to GP-GPU computing? I'd hate to imagine a world where Visicalc was
the most powerful application anyone needed to run on a personal
computer :)

G.