From: Ken Hagan on 26 Jul 2010 07:28
On Mon, 26 Jul 2010 07:49:24 +0100, Brett Davis <ggtgp(a)yahoo.com> wrote:
> ... dictionary based AI, the last remaining approach
> to emulating the human mind, as all the other approaches have
> failed. (Think ELIZA with a terabyte database.)
That would be "last remaining that I've thought of", with a strong
implication that it has survived this long simply because the other
failures were tried first.
> But ultimately this is a kludge to get the same results that
> the human mind does, but the human mind is massively parallel
> and soft plugboard wired up between neurons.
I think we can be pretty certain that the human mind is not a *soft*
plugboard on the sort of timescales that it solves intellectual problems.
On the question of its parallelism, I'll wait until someone comes up with
a plausible model for how it works. (Come to that, it doesn't make much
sense to take lessons in computer architecture from the brain either, for
the same reason.)
> So what problem is it that you want this functionality for?
> Fringe designs and apps are interesting mental exercises, fun.
Simulating almost any natural process, as Robert said.
Picking up on his "Stanford PhD" remark...
In the physical sciences, there is an unstated assumption amongst
examiners that unless a question includes a calculation, it lacks "rigour".
(Those of you who set exam questions may wish to contradict me on this
point, but it was certainly true when/where I did my degree.)
Nearly all of the calculations in undergraduate physics concern linear
systems, because non-linear ones can't be done. (Well, perhaps they can,
but probably not many and not by undergraduates.)
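The classic classroom example of this is the pendulum: the full equation
of motion theta'' = -sin(theta) has no elementary closed form, so the
undergraduate version replaces sin(theta) with theta and gets simple
harmonic motion. A minimal numerical sketch (my own illustration, not
anything from the thread) shows where that substitution holds up and
where it quietly breaks down:

```python
import math

def pendulum_angle(theta0, t, linear, dt=1e-3):
    """Integrate theta'' = -theta (linear) or theta'' = -sin(theta)
    (nonlinear) with semi-implicit Euler, starting at rest, g/L = 1."""
    theta, omega = theta0, 0.0
    for _ in range(int(t / dt)):
        accel = -theta if linear else -math.sin(theta)
        omega += accel * dt
        theta += omega * dt
    return theta

# Small initial angle (0.1 rad): the linear model tracks the real one.
small_lin = pendulum_angle(0.1, 5.0, linear=True)
small_non = pendulum_angle(0.1, 5.0, linear=False)

# Large initial angle (2 rad): the nonlinear period is noticeably longer,
# so the two solutions drift apart within a single swing.
large_lin = pendulum_angle(2.0, 5.0, linear=True)
large_non = pendulum_angle(2.0, 5.0, linear=False)
```

At 0.1 rad the two answers agree to a few parts in ten thousand; at
2 rad they disagree grossly, because the small-angle approximation was
never valid there in the first place.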
So students spend lots of time solving linear problems during their course
and graduate if they pass an exam that is mostly filled with linear
problems too.
If those students stay with their subject for long enough, reality will
kick in and they will slowly come to appreciate that linear approximations
are just that. But how long does this process take? Has the "Stanford PhD"
spent enough time in serious research to unlearn their undergraduate
training?
And on a more frivolous note, would I be correct in saying that the domain
of a linear approximation shrinks as you perform its
calculations to ever greater precision? If so, the current trend in
super-computing may be to contract down to a singularity of usefulness. :)
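To make the frivolous point concrete (again, my own sketch): take
sin(x) ~ x as the linear approximation. Since x - sin(x) grows
monotonically for x > 0, the "domain of applicability" for a given
error tolerance eps is just the largest x at which the error is still
below eps, and it does indeed shrink as you tighten eps:

```python
import math

def validity_radius(eps, x_max=2.0, steps=20000):
    """Largest x in (0, x_max] at which |sin(x) - x| stays below eps.
    A simple scan suffices because x - sin(x) is monotonic for x > 0."""
    r = 0.0
    for i in range(1, steps + 1):
        x = x_max * i / steps
        if abs(math.sin(x) - x) >= eps:
            break
        r = x
    return r

# Each hundredfold tightening of the tolerance shrinks the usable
# interval by roughly a factor of (100)**(1/3) ~ 4.6, since the
# leading error term is x**3 / 6.
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, validity_radius(eps))
```

So the interval contracts like the cube root of the tolerance: better
precision, smaller playground.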