From: James Van Buskirk on
"Terje Mathisen" <"terje.mathisen at tmsw.no"> wrote in message
news:l1esg7-34e1.ln1(a)ntp.tmsw.no...

> By how many percent would your code beat them for the 256 and 2048 element
> IMDCT?

That would require my writing code for that transform in the first
place. In terms of operation counts, perhaps identical, because I
haven't come up with any new algorithm since the one I published that
still holds the minimum operation count for power-of-2 FFTs (unless
someone else has beaten me subsequently), and since that algorithm can
be considered to be built on DCTs...
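
For a rough sense of scale, here is a back-of-the-envelope comparison
in C (my own illustration, leading terms only, lower-order terms
dropped for both algorithms, so treat the numbers as ballpark figures):
classic split-radix needs about 4*N*lg(N) real operations for a
power-of-2 complex FFT, while the lower count is usually quoted with a
leading term of roughly (34/9)*N*lg(N).

#include <math.h>
#include <stdio.h>

/* Leading-term real-operation counts for a size-n complex FFT, n a
 * power of 2.  Lower-order terms are deliberately omitted for both
 * algorithms, so these are ballpark figures, not exact counts.
 *   split-radix:        ~ 4      * n * lg(n)
 *   lower-count FFT:    ~ (34/9) * n * lg(n)
 */
int main(void)
{
    const double sizes[] = { 256.0, 2048.0 };
    for (int i = 0; i < 2; i++) {
        double n  = sizes[i];
        double sr = 4.0 * n * log2(n);
        double vb = 34.0 / 9.0 * n * log2(n);
        printf("n = %4.0f: 4 n lg n ~ %7.0f, (34/9) n lg n ~ %7.0f"
               " (%.1f%% fewer)\n",
               n, sr, vb, 100.0 * (1.0 - vb / sr));
    }
    return 0;
}

Asymptotically that is a bit under a 6% reduction in the leading term;
the exact counts differ from these leading terms at sizes as small as
256 and 2048, so take the percentage as an asymptotic figure only.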

Looking at their timing numbers for smaller power-of-2 FFTs, it seems
to me that FFTW doesn't effectively exploit the 2-way SIMD capability
of at least a Core 2 Duo. I am pretty much ignorant of the kind of
project you are working on: is it single-precision, double-precision,
or integer data? I'm really not very interested in this stuff any
more; I'm trying to make progress on completely different projects.
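
For what it's worth, "2-way SIMD" here just means one double-precision
complex value, {re, im}, per 128-bit SSE register, so the complex
multiply at the heart of every butterfly takes a handful of packed
instructions. A minimal sketch in C, assuming SSE3 (which a Core 2 Duo
has); this is only an illustration of mine, not FFTW's actual code:

#include <pmmintrin.h>   /* SSE3: _mm_addsub_pd (pulls in SSE2 too) */
#include <stdio.h>

/* Multiply two double-precision complex numbers stored as {re, im},
 * one per 128-bit register ("2-way" SIMD).
 * (a + bi)(c + di) = (ac - bd) + (ad + bc)i
 */
static inline __m128d cmul_pd(__m128d x, __m128d y)
{
    __m128d yr = _mm_unpacklo_pd(y, y);        /* {c, c}     */
    __m128d yi = _mm_unpackhi_pd(y, y);        /* {d, d}     */
    __m128d xs = _mm_shuffle_pd(x, x, 1);      /* {b, a}     */
    /* addsub: subtract in lane 0, add in lane 1 */
    return _mm_addsub_pd(_mm_mul_pd(x, yr),    /* {a*c, b*c} */
                         _mm_mul_pd(xs, yi));  /* {b*d, a*d} */
}

int main(void)
{
    __m128d x = _mm_set_pd(2.0, 1.0);   /* 1 + 2i (set_pd is hi, lo) */
    __m128d y = _mm_set_pd(4.0, 3.0);   /* 3 + 4i */
    double out[2];
    _mm_storeu_pd(out, cmul_pd(x, y));
    printf("(1+2i)(3+4i) = %g + %gi\n", out[0], out[1]); /* -5 + 10i */
    return 0;
}

Build with -msse3 (or -march=core2); single precision would pack four
floats per register instead and be 4-way.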

--
write(*,*) transfer((/17.392111325966148d0,6.5794487871554595D-85, &
6.0134700243160014d-154/),(/'x'/)); end


From: Ir. Hj. Othman bin Hj. Ahmad on
On Jul 5, 1:02 am, Robert Myers <rbmyers...(a)gmail.com> wrote:
> I am struck by the success of a relatively clueless strategy in computer
> microarchitecture:
>
> If it worked more than once, it will probably work again.  The only
> possible improvement is to learn to anticipate exceptions.
>
> I'd call that bottom up prediction.
>
> Most top-down prediction schemes (I think I understand the computer and
> the program and I'll tell the computer what to expect) have been
> relative failures (most notoriously: Itanium).
>
> The top-down approach has been a similar disappointment in artificial
> intelligence:  I understand logic and I'll teach the computer how to do
> what I think I know how to do.
>
> If your instinct at this point is to dash off a post that I'm simply
> over-generalizing, save yourself the time and stop reading here.
>
> Most biological control systems and especially central nervous systems
> have severe and apparently insurmountable latency problems.  In such a
> situation, the only possible strategy is to learn to anticipate.  One
> justification for watching sports closely (especially basketball, where
> anticipation occurs in less than the blink of an eye) is to observe just
> how good biological systems are at anticipation.
>
> First, let me bow obsequiously to the computer architects and those who
> support them who have just kept trying things until they found things
> that actually worked.  Thomas Edison would be proud.
>
> My if-it-worked-more-than-once-it-will-probably-work-again machinery
> thinks it sees a pattern:
>
> 1. Anticipation is central to (almost) all cognitive and computational
> processes.
>
> 2. Anticipation cannot be anticipated.  It must be learned.
>
> 3. The agenda of computer architecture, which is far from dead, is to
> keep working the if-it-worked-more-than-once-it-will-probably-work-again
> angle harder and harder.  That is, after all, practically all the human
> central nervous system knows how to do (I think).
>
> If all of this seems patently obvious, feel free to say so.  It has only
> slowly dawned on me.
>
> Robert.

I agree completely with you. It is not so obvious. It may be common
sense in a lot of cases, but it takes time and intellectual effort to
relate it to specific circumstances.

For example, I have been obsessed with prediction for more than 20
years, and yet it was only a few months ago that I realized I might be
able to apply prediction techniques to cache designs. It may not be
commercially viable at the moment; it ought to be viable in academic
work, but in reality it may not be, because peer reviewers may stop
you if you deviate too much from standard practice.

Yes, I also support the bottom-up approach, because I think it is the
only viable way, and yet it is not universally applied in computer
architecture.

I agree that a lot of knowledge is useful for prediction, but I am
trying very hard to show that it is possible to predict with the least
amount of knowledge.
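
To make the if-it-worked-more-than-once idea concrete, here is a
minimal sketch of my own in C of the classic 2-bit saturating counter
used in branch predictors. It is about the smallest possible
embodiment of that rule, and in principle the same counter could gate
a cache prefetch or way-prediction decision; it predicts using nothing
but its own recent history, i.e. with almost no knowledge at all:

#include <stdbool.h>
#include <stdio.h>

/* A 2-bit saturating counter: the minimal "it worked more than once,
 * so it will probably work again" predictor.  States 0-1 predict
 * "don't", states 2-3 predict "do"; a single exception in the strong
 * states does not flip the prediction, which is the "learn to
 * anticipate exceptions" part.
 */
typedef struct { unsigned state; } predictor2;   /* state in 0..3 */

static bool predict(const predictor2 *p) { return p->state >= 2; }

static void train(predictor2 *p, bool outcome)
{
    if (outcome) { if (p->state < 3) p->state++; }
    else         { if (p->state > 0) p->state--; }
}

int main(void)
{
    /* Outcome pattern: mostly "yes" with one exception. */
    const bool outcomes[] = { 1, 1, 1, 0, 1, 1, 1, 1 };
    int n = (int)(sizeof outcomes / sizeof outcomes[0]);
    predictor2 p = { 0 };
    int hits = 0;

    for (int i = 0; i < n; i++) {
        bool guess = predict(&p);
        hits += (guess == outcomes[i]);
        train(&p, outcomes[i]);
    }
    printf("%d/%d correct with no knowledge of the program at all\n",
           hits, n);
    return 0;
}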