From: Morten Reistad on
In article <hr7dh1$em$1(a)smaug.linux.pwf.cam.ac.uk>, <nmm1(a)cam.ac.uk> wrote:
>In article <cf25e322-30f4-4915-947d-aebdbe28eec1(a)d12g2000vbr.googlegroups.com>,
>MitchAlsup <MitchAlsup(a)aol.com> wrote:

>Yes, that's true - for CPU architectures - but I was also thinking
>of programming languages!
>
>The point there is, if the only languages are extremely Von Neumann,
>any approach to a different class of architecture would be stillborn,
>because nobody would be able to use it. That is one of the reasons
>that many of the specialist supercomputers have been essentially
>usable only from Fortran - it is a far less Von Neumann language
>than C/C++ and their followers.

Also, don't forget the mindset of programmers. I have been herding
programmers since about 1986, and I keep pushing them away from the
absolute, totally imperative style that fits C so well, towards more
declarative and reactive programming that would fit Simula or Smalltalk.

This imperative style makes code almost hopeless to parallelise, and
it makes it difficult to port.

It is an uphill battle. There is a gut reaction to lay out the detail
procedurally.

The other professions that take to programming as a secondary path to
getting their work done adapt much more easily to toning down the
imperative style.

>The ridiculous thing is that there have been very successful languages
>based on different models in the past, and there are still some of
>them around (the functional ones being an obvious category).

That is one path away from the imperative style. I think we need
the whole toolbox in a language, with all the functional, declarative,
reactive and descriptive tools we can muster.

>>And thus, we may have evolved ourselves into a corner from which there
>>is no economically sound way out.
>
>Agreed.

We suffer at a much more basic level. We need to fix the process
models programmers work with.

-- mrr
From: Piotr Wyderski on
Quadibloc wrote:

> So I go further: not only don't I "blame" Turing and von Neumann... I
> also don't "blame" everyone else who came later for failing to come up
> with some amazing new revolutionary insight that would transform how
> we think about computing. Because unlike the von Neumann model, this
> new insight would have to involve a way to fundamentally transform
> algorithms from their old paper-and-pencil form.

Every now and then I work on my pet idea of rephrasing algorithms in
terms of the multivariate complex exponential function. The basic idea
is as follows: take a two-dimensional infinite discrete plane. The
indices along the X axis specify your (potentially infinite) set of
variables X_0, X_1, ..., X_n, and the second axis represents time. The
values of the vector of functions f_0, f_1, ..., f_n represent your
current system state. The variables are "in spirit" in {0, 1}, but in
fact are extended via arithmetization to C.
A function can be built from the following axioms:

1. every complex constant is a function;
2. every variable X_i is a function;
3. if f and g are functions, then so are:
a) g o f (composition);
b) f + g (the usual complex addition);
c) f * g (the usual complex multiplication);
4. exp(f) is a function if f is a function.

From that you can derive subtraction and division by a non-zero
constant in a trivial way. Having that, you can restore the predicate
calculus ("->" means "translates into"):

x and y -> x * y
not x -> 1 - x

and thus:

x or y -> x + y - x * y
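
These translations are easy to check mechanically. A minimal C99 sketch
(my own illustration, not part of the construction; the helper names
b_not/b_and/b_or are invented here) that evaluates them on all of {0,1}^2:

#include <complex.h>
#include <stdio.h>

/* Arithmetized connectives over C; inputs are "in spirit" in {0, 1}.
   (Helper names are illustrative, not part of the original framework.) */
static double complex b_not(double complex x) { return 1 - x; }
static double complex b_and(double complex x, double complex y) { return x * y; }
static double complex b_or(double complex x, double complex y) { return x + y - x * y; }

int main(void)
{
    for (int x = 0; x <= 1; x++)
        for (int y = 0; y <= 1; y++)
            printf("x=%d y=%d  not x=%.0f  x and y=%.0f  x or y=%.0f\n",
                   x, y, creal(b_not(x)), creal(b_and(x, y)), creal(b_or(x, y)));
    return 0;
}

On {0,1} inputs the results stay in {0,1}, which is what allows the same
arithmetic to be reused for the "if" below.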

My conjecture is that every recursive function can be expressed in this
framework (via rule 3a), but I have not yet succeeded in constructing a
reasonable recursion termination condition (for instance, the real-valued
operator "less than"). You can easily arithmetize the "if" statement

r := if c then x else y

where c is a Boolean condition, as:

r = (c * x) + (1 - c) * y

so 3a immediately gives you "while" (with, by design, infinite recursion,
but that is not a problem, since the 0 * x term cancels out the infinite
"tail call") -- the question is how one constructs the predicate c. I
think that (4) is crucial, as it allows one to construct sin and cos,
which have nice periodic properties -- among other things, to be used as
universal quantifiers?
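
For instance (my example, not part of the original post; the names a_if
and is_odd are invented), the exp rule already yields a predicate of
exactly this periodic kind: (1 - exp(i*pi*x))/2 is built only from the
axioms (plus division by the constant 2, which is derivable) and equals
0 on even integers and 1 on odd ones, so it can drive the arithmetized
"if". A small C99 sketch:

#include <complex.h>
#include <stdio.h>

static const double PI = 3.14159265358979323846;

/* Arithmetized "if": selects x when c == 1 and y when c == 0. */
static double complex a_if(double complex c, double complex x, double complex y)
{
    return c * x + (1 - c) * y;
}

/* Parity predicate built from the axioms: constants, *, exp, + (and
   division by the constant 2).  For integer n this is (1 - (-1)^n)/2,
   i.e. 0 for even n and 1 for odd n.  Names are illustrative only. */
static double complex is_odd(double complex n)
{
    return (1 - cexp(I * PI * n)) / 2;
}

int main(void)
{
    for (int n = 0; n < 4; n++) {
        double complex c = is_odd(n);
        /* a_if picks 100 for odd n, 200 for even n */
        printf("n=%d  is_odd=%.0f  selected=%.0f\n",
               n, creal(c), creal(a_if(c, 100, 200)));
    }
    return 0;
}

This does not answer the open question (a "less than" usable as a
recursion terminator), but it shows the kind of {0,1}-valued predicate
that exp makes constructible within the algebra.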

The constructible functions are holomorphic, so it would enable the use
of the robust 19th-century complex analysis toolbox to reason about
algorithms, instead of the more modern algebraic approaches. With that
you can compute partial derivatives of, e.g., a sorting algorithm, its
gradient, perform perturbation calculus, etc. A question remains: what
for? -- and I don't know the answer yet. :-)

Best regards
Piotr Wyderski
From: Robert Myers on
On Apr 28, 5:15 am, "Piotr Wyderski"
<piotr.wyder...(a)mothers.against.spam.gmail.com> wrote:
>
> The constructible functions are holomorphic, so it would enable the use
> of the robust 19th-century complex analysis toolbox to reason about
> algorithms, instead of the more modern algebraic approaches. With that
> you can compute partial derivatives of, e.g., a sorting algorithm, its
> gradient, perform perturbation calculus, etc. A question remains: what
> for? -- and I don't know the answer yet. :-)
>

Norman Levinson (of Levinson's theorem and some of the sharper
estimates for the zeros of the Riemann zeta function) paused at the
blackboard to say something like, "Many careers have been built on
little more than taking a standard problem into the complex plane."

Robert.
From: Anton Ertl on
Rick Jones <rick.jones2(a)hp.com> writes:
>Anton Ertl <anton(a)mips.complang.tuwien.ac.at> wrote:
>> A more frequent situation in practice is probably when the compiler
>> does not know what will happen at run-time; but that's irrelevant,
>> because it does not happen with SPEC CPU, because that uses profile
>> feedback.
>
>SPECcpu2006 explicitly disallows PBO in base and only allows it in
>peak. That was a change from SPECcpu2000, which allowed PBO in both.

Good; thanks for that update. And the IA-64 Cint and CFP results I
see on the SPEC website contain both baseline and "result" (peak)
columns; also good.

Of course, SPEC CPU is still designed in other respects in ways that
make it unrepresentative of much of the computer use I (and, I think,
many others) experience, and therefore not very suitable (at least
without complementary benchmarks) for the purposes it is often used for:
as an indicator of computer performance, and for directing compiler
optimization development. One thing that comes to mind is that
process-creation and dynamic-linking overhead is much more important in
my usage than in SPEC CPU.

- anton
--
M. Anton Ertl Some things have to be seen to be believed
anton(a)mips.complang.tuwien.ac.at Most things have to be believed to be seen
http://www.complang.tuwien.ac.at/anton/home.html
From: Anton Ertl on
"nedbrek" <nedbrek(a)yahoo.com> writes:
>Hello all,
>
>"Anton Ertl" <anton(a)mips.complang.tuwien.ac.at> wrote in message
>news:2010Apr23.153819(a)mips.complang.tuwien.ac.at...
>> "nedbrek" <nedbrek(a)yahoo.com> writes:
>>>2) chk.a is too expensive. You suffer a branch mispredict penalty, plus
>>>you
>>>probably miss in the L1I (recovery code is rarely used, therefore rarely
>>>cached).
>>
>> If the recovery code is rarely used, why is the branch mispredicted?
>> And why do you suffer a miss in the I-cache? In the usual case the
>> branch prediction will be correct (no mispredict penalty), and the
>> recovery code will not be performed (no I-cache miss).
>
>The code is going to look like:
>ld r1 = a
>add = r1
>sub = r1
>...
>chk.a r1, fixup
>...
><a long way away>
>fixup:
>ld r1 = a
>add = r1
>sub = r1
>jmp back
>
>The chk is the branch (if r1 has been invalidated, jump to a section which
>will redo the dependent instructions). If the load is always invalid, the
>branch can be predicted correctly - but then you always do the work twice ->
>lower performance.
>
>If the load is infrequently invalidated, you probably can't predict it ->
>branch mispredict and I cache miss.

Sure you can predict it: You predict that the branch is usually not
taken. In the usual case, there will be no misprediction and no
I-cache miss. In the infrequent case, you will get a misprediction
and an I-cache miss, and you do the work twice; but fortunately, this
case is infrequent.
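
To make the scenario concrete, the source-level pattern being speculated
on is something like the following hypothetical C fragment (names are
mine; the correspondence to ld.a/chk.a is only in the comments):

#include <stdio.h>

/* As written, the load of *p may not be moved above the store through q,
   because p and q might alias.  With data speculation the compiler hoists
   it anyway as "ld.a r1 = [p]", schedules the dependent add early, and
   plants "chk.a r1, fixup" after the store; the fixup block just redoes
   the load and the add.  If p and q rarely alias, the chk.a branch is
   almost never taken, so it predicts well and the fixup code normally
   never gets fetched. */
static int kernel(int *p, int *q)
{
    *q = 0;          /* possibly-aliasing store                   */
    int r1 = *p;     /* load the compiler wants to hoist above *q */
    return r1 + 1;   /* dependent use, hoisted speculatively too  */
}

int main(void)
{
    int a = 41, b = 0;
    printf("no alias: %d\n", kernel(&a, &b));  /* fast path: check not taken     */
    printf("alias:    %d\n", kernel(&a, &a));  /* rare path: recovery would fire */
    return 0;
}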

>> Yes, SPEC CPU is everything that counts. The applications that use
>> dynamic linking and that have to build in finite time, and are not
>> written in the subset of C that's supported by the whole-program
>> analyser (which is not used anyway because the rebuilds take too
>> long), i.e., most of the real-world applications, they are irrelevant.
>
>Sure, this is Itanium. Spec was all we had for a long time (most of the
>architecture committee used 9-queens), and we were glad to have it.

Is 9-queens the n-queens puzzle with n=9? Which architecture
committee?

Anyway, I would have expected the IA-64 designers to have had lots of
code for HP-UX, MPE, and various IA-32 OSs available. IIRC HP claimed
to have analysed huge amounts of HP-UX and MPE code when designing
PA-RISC. Did they really limit themselves to SPEC CPU for IA-64?

- anton
--
M. Anton Ertl Some things have to be seen to be believed
anton(a)mips.complang.tuwien.ac.at Most things have to be believed to be seen
http://www.complang.tuwien.ac.at/anton/home.html