From: Colin Paul Gloster on
On Thursday 10th June 2010, Randy Brukardt sent via Jacob Sparre
Andersen's netnews transfer protocol server:
|---------------------------------------------------------------------------|
|""Jeffrey R. Carter" <spam.jrcarter.not(a)spam.acm.org> wrote in message |
|news:hur95a$3oh$1(a)tornado.tornevall.net... |
|> Colin Paul Gloster wrote: |
|>> |
|>> Though there should be no difference, in reality compilers produce |
|>> less slow machine code when looping down to zero instead of up from |
|>> zero. |
|> |
|> The root of all evil. |
| |
|Goody, we're playing Jeopardy! I assume that was "Programming" for 200. :-)|
| |
|Q: What is premature optimization?? |
| |
| Randy." |
|---------------------------------------------------------------------------|

Jeffrey R. Carter and Randall L. Brukardt,

If either of you claims that what you posted was an unclear phrase, not
intended to be libelous, even though it looks like an accusation that I
inappropriately attempted premature optimization, then clarify it or be
sued for libel. You already knew that I had made major improvements to
the speed of simulations which I depend on in order to stay alive this
year (unfortunately the funding shall end in December 2010 no matter how
well I do), and that a speed-up of just one per cent yields a valuable
saving of hours.

Nasser M. Abbasi was not reverting to Ada as a complement to
Mathematica with the objective of producing slower software than
Mathematica.

Yours sincerely,
Colin Paul Gloster
From: Brian Drummond on
On Fri, 11 Jun 2010 10:48:37 +0200, "J-P. Rosen" <rosen(a)adalog.fr> wrote:

>Yannick Duchêne (Hibou57) wrote:
>> On Thu, 10 Jun 2010 15:27:38 +0200, J-P. Rosen <rosen(a)adalog.fr> wrote:
>>> For output, rounding is specified in the standard. And BTW, the standard
>>> does not specify IEEE (fortunately, it did not make the same mistake as
>>> Java!).
>> Which mistake ? (not to disagree, just to understand)
....
>There is an interesting paper on the internet about why Java failed on
>numerics. Sorry, I don't have the exact reference at hand, but it should
>be easy to find.

At a wild guess, easy to find for anyone who's heard of Prof Kahan.

But just in case... I presume you mean
http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf

On the other hand, the abstract starts:
"Java�s floating-point arithmetic is blighted by five gratuitous mistakes:"

So perhaps the original question "which mistake?" still stands;
nos 3) and 4) look particularly bad.

- Brian

From: Randy Brukardt on
"Colin Paul Gloster" <Colin_Paul_Gloster(a)ACM.org> wrote in message
news:alpine.LNX.2.00.1006111207170.3608(a)Bluewhite64.example.net...
....
> If either of you claims that what you posted was an unclear phrase, not
> intended to be libelous, even though it looks like an accusation that I
> inappropriately attempted premature optimization, then clarify it or be
> sued for libel.

Don't be so sensitive!

Optimization is premature unless there is a demonstrated need for additional
performance.

In any case, being guilty of premature optimization has almost no reflection
on how good or bad of a programmer you are (something I am not in any
position to judge). The best programmers are still guilty of it from time to
time. I know I've been guilty of it multiple times, and it is useful to have
outsiders point that out, in order that I don't repeat the same mistake on
my next project. And it's very useful to repeat the mantra over and over, in
order to reduce the temptation.

> You already knew that I had made major improvements to
> the speed of simulations which I depend on in order to stay alive this
> year (unfortunately the funding shall end in December 2010 no matter how
> well I do), and that a speed-up of just one per cent yields a valuable
> saving of hours.

Sure there are cases like that; your work application has a demonstrated
need for more performance. Such situations are rare, however. I spent a lot
of effort optimizing lookup times in our spam filter, only to find out that
it wasn't a significant percentage of the filtering effort.

And you have to know that I have been writing optimizing Ada compilers for
the last 30 years (well, 29 years and 9 months to be exact), so I know how
to performance optimize when necessary. But...

> Nasser M. Abbasi was not reverting to Ada as a complement to
> Mathematica with the objective of producing slower software than
> Mathematica.

My understanding was that the OP was comparing the readability and
ease-of-creation of Fortran and Ada. I saw no indication that he was
concerned about the performance. And in many cases, the performance of the
code isn't very relevant. (Remember the 90/10 rule!) In the absence of a
demonstrated need for better performance, making the code less readable is a
bad idea, no matter what effect it has on performance. That's exactly the
definition of premature optimization, in my opinion.

Randy.

P.S. I wouldn't waste time on suing someone. Only the lawyers make money in
most lawsuits.


From: Vincent LAFAGE on
Hi,

I would like to make a few comments on your conclusion.
In fact, I have to disagree with it.

> 1. In Ada, I did not have to change the indices m and k in the
> summation to account for the off-by-one in the definition of the DFT,
> which runs from 0 to N-1. In Ada, using 'Range and defining the arrays
> to go from 0 .. N-1 solved the problem.

It is a cliché that Fortran indexes have to start at 1.
It was already possible in Fortran 77 to start an index wherever you
want it to start; look, for instance, at
http://www.physics.nau.edu/~bowman/PHY520/F77tutor/10_arrays.html

  real b(0:19), weird(-162:237)

In your case, it would lead to

  COMPLEX, dimension (0:N-1) :: X
  REAL, dimension (0:N-1) :: data=(/1.0,2.0,3.0/)

In Fortran 90 you do not have the very convenient X'range,
but you can use the following statement to keep the code general:

  DO k = lbound (X, 1), ubound (X, 1)
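
Putting the two points together, a minimal stand-alone illustration could
look like this (an untested sketch; the program name bounds_demo is made
up for the example):

  PROGRAM bounds_demo
    IMPLICIT NONE
    INTEGER, PARAMETER :: N = 3
    ! Array declared with lower bound 0, as in the Ada version.
    REAL, DIMENSION(0:N-1) :: data = (/1.0, 2.0, 3.0/)
    INTEGER :: k

    ! The loop follows the array's own bounds, whatever they are.
    DO k = lbound(data, 1), ubound(data, 1)
       print *, k, data(k)
    END DO
  END PROGRAM bounds_demo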

When speaking about Fortran, we should not forget to specify which one,
Fortran 90 being a completely different beast.
As far as we can compare, the authors of Fortran 90 drew a lot from
Ada 83.

> 2. In Ada, the compiler complained more times about types being
> mixed up. I placed float() around the places it complained about.

We can certainly complain about Fortran's implicit type promotion.
Still, a modern Fortran compiler provides much the same safeguard
against it.
For instance,
$ gfortran-4.3 -Wall -Wsurprising -Wconversion dft.f90 -o dft
will reveal 11 surprising implicit conversions, such as
dft.f90:14.29:
COMPLEX, parameter :: J =(0,1)
1
Warning: Conversion from INTEGER(4) to REAL(4) at (1)

So "-Wall" is not the last word as far as warning are concerned.

> 3. It actually took me less time to write the Ada function than the FORTRAN
> one, even though I am equally unfamiliar with both at this time :)

A 17-statement-lines-of-code example is not anything close to a
realistic example for judging how the effort scales.
Not only is the sample small, but the effort does not scale
linearly, nor in the same way for both languages.
Besides, you did not tell us how long it took in either case. But that
would be telling... ;)

I am also an Ada enthusiast, but it does not prevent my being a Fortran
enthusiast as well.

Best regards,
Vincent

On 09/06/2010 12:49, Nasser M. Abbasi wrote:
> I never used complex variables before in Ada, so for the last 3 hrs I
> was learning how to do it. I wrote a simple program, to find the DFT of
> an array of 3 elements {1,2,3} (DFT=discrete Fourier transform).
>
> The definition of DFT is one equation with summation, here it is, first
> equation you'll see:
>
> http://en.wikipedia.org/wiki/Discrete_Fourier_transform
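>
> [For reference, that first equation is X(k) = sum over n from 0 to N-1 of
> x(n) * exp(-2*pi*i*k*n/N), for k = 0 .. N-1, which is what the code below
> computes.]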
>
> Since I have not programmed in Ada nor in FORTRAN for a looong time, I
> am very rusty, I program in Mathematica and sometimes in matlab these
> days, but I wanted to try Ada on complex numbers.
>
> I also wrote a FORTRAN equivalent of the small Ada function. Here is
> below the Ada code, and the FORTRAN code. Again, do not scream too much
> if it is not good code, I just learned this now, I am sure this can be
> improved a lot.
>
> And for comparing, I show the Matlab and the Mathematica output just to
> make sure.
>
>
> ====== START ADA ============
> --
> -- dft.adb, compiled with GNAT 4.3.4 20090804 (release) 1
> -- under CYGWIN 1.7.5
> -- gnatmake dft.adb
> --
> -- ./dft.exe
> -- ( 6.00000E+00, 0.00000E+00)
> -- (-1.50000E+00, 8.66026E-01)
> -- (-1.50000E+00,-8.66025E-01)
> -- $
>
>
> with Ada.Text_IO; use Ada.Text_IO;
> with Ada.Numerics.Complex_Types; use Ada.Numerics.Complex_Types;
>
> with Ada.Numerics; use Ada.Numerics;
>
> with Ada.Numerics.Complex_Elementary_Functions;
> use Ada.Numerics.Complex_Elementary_Functions;
>
> with Ada.Complex_Text_IO; use Ada.Complex_Text_IO;
>
> procedure dft is
> N : positive := 3;
> J : constant complex :=(0.0,1.0); -- SQRT(-1)
> X : array(0 .. N-1) of Complex := (others=>(0.0,0.0));
> data : array(0 .. N-1) of float :=(1.0,2.0,3.0);
>
> begin
> FOR k in X'range LOOP
> FOR m in data'range LOOP
> X(k) := X(k) + data(m) * exp(- J*(2.0*Pi)/float(N) * float(m*k) );
> END LOOP;
> put(X(k)); new_line;
> END LOOP;
>
> end dft;
> ================== END ADA ==============
>
> ======= FORTRAN code ===========
> ! dft.f90, compiled with GCC 4.3.4
> ! under CYGWIN 1.7.5
> ! gfortran -Wall dft.f90
> ! ./a.exe
> ! ( 6.0000000 , 0.0000000 )
> ! ( -1.4999999 , 0.86602557 )
> ! ( -1.5000005 ,-0.86602497 )
> !
>
> PROGRAM dft
>
> IMPLICIT NONE
>
> INTEGER, PARAMETER :: N = 3
> COMPLEX, parameter :: J =(0,1)
>
> REAL, parameter :: Pi = ACOS(-1.0)
> INTEGER :: k,m
> COMPLEX, dimension(N) :: X
> REAL, dimension(N) :: data=(/1.0,2.0,3.0/)
>
> DO k=1,N
> X(k)=(0,0)
> DO m=1,N
> X(k) = X(k) + data(m) * EXP(-1.0*J*2.0*Pi/N *(m-1)*(k-1) )
> END DO
> print *,X(k)
>
> END DO
>
> END PROGRAM dft
> ==================================
>
> ==== Matlab code ====
> EDU>> fft([1,2,3])'
>
> ans =
>
> 6.0000
> -1.5000 - 0.8660i
> -1.5000 + 0.8660i
> ===============================
>
> === Mathematica ====
> In[5]:= Chop[Fourier[{1, 2, 3}, FourierParameters -> {1, -1}]]
>
> Out[5]= {6., -1.5 + 0.8660254037844386*I, -1.5 - 0.8660254037844386*I}
> =========================
>
> Conclusion:
> I actually liked the Ada implementation more than FORTRAN because:
>
> 1. In Ada, I did not have to change the indices m and k in the
> summation to account for the off-by-one in the definition of the DFT,
> which runs from 0 to N-1. In Ada, using 'Range and defining the arrays
> to go from 0 .. N-1 solved the problem.
>
> 2. In Ada, the compiler complained more times about types being
> mixed up. I placed float() around the places it complained about.
>
> 3. It actually took me less time to write the Ada function than the FORTRAN
> one, even though I am equally unfamiliar with both at this time :)
>
> ok, this was a fun learning exercise
>
>
> --Nasser

From: Nasser M. Abbasi on
On 6/14/2010 2:33 AM, Vincent LAFAGE wrote:

> For instance,
> $ gfortran-4.3 -Wall -Wsurprising -Wconversion dft.f90 -o dft
> will reveal 11 surprising implicit conversion such as
> dft.f90:14.29:
> COMPLEX, parameter :: J =(0,1)
> 1
> Warning: Conversion from INTEGER(4) to REAL(4) at (1)
>
> So "-Wall" is not the last word as far as warning are concerned.
>

Thanks, I did not know about these flags. I am impressed now with FORTRAN.

The f90 compiler, with those flags added, became as annoying, oops, I
mean as picky as the Ada compiler, and complained about all the implicit
conversions.

Also, using lbound and ubound in FORTRAN helped make the logic simpler
by avoiding the off-by-one problem.

To update, below is the current version of the example in Ada and
FORTRAN. I also made a small page where I kept these for reference.

http://12000.org/my_notes/mma_matlab_control/KERNEL/node94.htm

From this simple example, it now seems to me that Ada and FORTRAN can be
equally good languages for scientific applications.

Thanks to everyone for the suggestions.

============= Ada ====================
--
-- dft.adb, compiled with GNAT 4.3.4 20090804 (release) 1
-- under CYGWIN 1.7.5
-- $ gnatmake dft.adb
-- gcc -c dft.adb
-- gnatbind -x dft.ali
-- gnatlink dft.ali
-- $ ./dft.exe
--
-- ( 6.00000E+00, 0.00000E+00)
-- (-1.50000E+00, 8.66026E-01)
-- (-1.50000E+00,-8.66025E-01)

with Ada.Text_IO; use Ada.Text_IO;
with Ada.Numerics.Complex_Types; use Ada.Numerics.Complex_Types;

with Ada.Numerics; use Ada.Numerics;

with Ada.Numerics.Complex_Elementary_Functions;
use Ada.Numerics.Complex_Elementary_Functions;

with Ada.Complex_Text_IO; use Ada.Complex_Text_IO;

procedure dft is
   N : constant := 3; -- named number, no conversion to Float needed
   X : array(0 .. N-1) of Complex := (others=>(0.0,0.0));
   data : constant array(0 .. N-1) of float :=(1.0,2.0,3.0);
   Two_Pi_Over_N : constant := 2 * Pi / N;
   -- named number, outside the loop, like in ARM 3.3.2(9)
begin
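   -- Note: J below is the imaginary unit j declared in Ada.Numerics.Complex_Types
   -- (made visible by the use clause above), so the explicit constant from the
   -- first version is no longer needed.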
   FOR k in X'range LOOP
      FOR m in data'range LOOP
         X(k) := X(k) + data(m)*exp(-J*Two_Pi_Over_N * float(m*k));
      END LOOP;
      put(X(k)); new_line;
   END LOOP;
end dft;

====================== Fortran ======================
! dft.f90, compiled with GCC 4.3.4
! under CYGWIN 1.7.5
! $ gfortran -Wall -Wsurprising -Wconversion dft.f90
! $ ./a.exe
! ( 6.0000000 , 0.0000000 )
! ( -1.4999999 , 0.86602557 )
! ( -1.5000005 ,-0.86602497 )
! $

PROGRAM dft

IMPLICIT NONE

INTEGER, parameter :: N = 3
COMPLEX, parameter :: J =(0.0,1.0)
REAL, parameter :: Pi = ACOS(-1.0)
INTEGER :: k,m
COMPLEX, dimension(0:N-1) :: X
REAL, dimension(0:N-1) :: data=(/1.0,2.0,3.0/)
REAL, parameter :: Two_Pi_Over_N = 2.0*Pi/real(N)

DO k = lbound(X, 1), ubound(X, 1)
   X(k) = (0.0,0.0)
   DO m = lbound(data, 1), ubound(data, 1)
      X(k) = X(k) + complex(data(m),0.0) &
             * EXP(-J*complex(Two_Pi_Over_N*real(m*k),0.0))
   END DO
   print *, X(k)
END DO

END PROGRAM dft


--Nasser