From: Vincent LAFAGE on
If we want not only compile-time warnings but also run-time checks, there is
-fbounds-check

like in
gfortran -Wall -Wsurprising -Wconversion -fbounds-check dft.f90 -o dft

It can be pretty useful, and close to some of the runtime checks of Ada.
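
For illustration, here is a minimal, hypothetical example (the file name
oob.f90 and the array are made up) of the kind of error that -fbounds-check
turns into a clean run-time abort instead of a silent read past the end of
the array:

! oob.f90 -- illustrative sketch only; build with
!   gfortran -Wall -fbounds-check oob.f90 -o oob
PROGRAM oob
   IMPLICIT NONE
   INTEGER :: k
   REAL, dimension(0:2) :: a = (/1.0, 2.0, 3.0/)
   DO k = 0, 3        ! the last iteration references a(3), which does not exist
      print *, a(k)
   END DO
END PROGRAM oob

With the flag, the run stops with a bounds-violation message at a(3);
without it, the read silently goes past the end of the array.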

So, recalling some larger-scale results I provided in a former
thread: with Ada and Fortran being equally good for coding numerical
applications, Fortran 90 is still twice as fast with complex...

Vincent

PS: on a completely different level, we could use less UPPERCASE, as most
of us are more used to reading lower-case or Capitalized text,
particularly for reserved words.
It is less work for our poor programmers' brains.

The same goes for the languages' names, as Fortran has not been
all-uppercase since Fortran 90, and ADA is the American Dental
Association. ;)

On 14/06/2010 14:29, Nasser M. Abbasi wrote:
> On 6/14/2010 2:33 AM, Vincent LAFAGE wrote:
>
>> For instance,
>> $ gfortran-4.3 -Wall -Wsurprising -Wconversion dft.f90 -o dft
>> will reveal 11 surprising implicit conversions such as
>> dft.f90:14.29:
>> COMPLEX, parameter :: J =(0,1)
>>                             1
>> Warning: Conversion from INTEGER(4) to REAL(4) at (1)
>>
>> So "-Wall" is not the last word as far as warnings are concerned.
>>
>
> Thanks, I did not know about these flags. I am impressed now with FORTRAN.
>
> The f90 compiler, with those flags added, became as annoying, oops, I
> mean as picky as the Ada compiler and complained about all the implicit
> conversions.
>
> Also, the use of lbound and ubound in FORTRAN helped make the logic
> simpler by avoiding the off-by-one problem.
>
> To update, below is the current version of the example in Ada and
> FORTRAN. I also made a small page where I kept these for reference.
>
> http://12000.org/my_notes/mma_matlab_control/KERNEL/node94.htm
>
> It now seems to me, from this simple example, that Ada and
> FORTRAN can be equally good languages for scientific applications.
>
> Thanks to everyone for the suggestions.
>
> ============= Ada ====================
> --
> -- dft.adb, compiled with GNAT 4.3.4 20090804 (release) 1
> -- under CYGWIN 1.7.5
> -- $ gnatmake dft.adb
> -- gcc -c dft.adb
> -- gnatbind -x dft.ali
> -- gnatlink dft.ali
> -- $ ./dft.exe
> --
> -- ( 6.00000E+00, 0.00000E+00)
> -- (-1.50000E+00, 8.66026E-01)
> -- (-1.50000E+00,-8.66025E-01)
>
> with Ada.Text_IO; use Ada.Text_IO;
> with Ada.Numerics.Complex_Types; use Ada.Numerics.Complex_Types;
>
> with Ada.Numerics; use Ada.Numerics;
>
> with Ada.Numerics.Complex_Elementary_Functions;
> use Ada.Numerics.Complex_Elementary_Functions;
>
> with Ada.Complex_Text_IO; use Ada.Complex_Text_IO;
>
> procedure dft is
>    N    : constant := 3;  -- named number, no conversion to Float needed
>    X    : array (0 .. N-1) of Complex := (others => (0.0, 0.0));
>    data : constant array (0 .. N-1) of float := (1.0, 2.0, 3.0);
>    Two_Pi_Over_N : constant := 2 * Pi / N;
>    -- named number, outside the loop, like in ARM 3.3.2(9)
> begin
>    FOR k in X'range LOOP
>       FOR m in data'range LOOP
>          X(k) := X(k) + data(m) * exp(-J * Two_Pi_Over_N * float(m*k));
>       END LOOP;
>       put(X(k)); new_line;
>    END LOOP;
> end dft;
>
> ====================== Fortran ======================
> ! dft.f90, compiled with GCC 4.3.4
> ! under CYGWIN 1.7.5
> ! $ gfortran -Wall -Wsurprising -Wconversion dft.f90
> ! $ ./a.exe
> ! ( 6.0000000 , 0.0000000 )
> ! ( -1.4999999 , 0.86602557 )
> ! ( -1.5000005 ,-0.86602497 )
> ! $
>
> PROGRAM dft
>
>    IMPLICIT NONE
>
>    INTEGER, parameter :: N = 3
>    COMPLEX, parameter :: J = (0.0, 1.0)
>    REAL,    parameter :: Pi = ACOS(-1.0)
>    INTEGER :: k, m
>    COMPLEX, dimension(0:N-1) :: X
>    REAL,    dimension(0:N-1) :: data = (/1.0, 2.0, 3.0/)
>    REAL,    parameter :: Two_Pi_Over_N = 2.0*Pi/real(N)
>
>    DO k = lbound(X, 1), ubound(X, 1)
>       X(k) = (0.0, 0.0)
>       DO m = lbound(data, 1), ubound(data, 1)
>          X(k) = X(k) + complex(data(m), 0.0) &
>                      * EXP(-J*complex(Two_Pi_Over_N*real(m*k), 0.0))
>       END DO
>       print *, X(k)
>    END DO
>
> END PROGRAM dft
>
>
> --Nasser

From: Colin Paul Gloster on
On June 11th, 2010, Randy Brukardt sent:

|-----------------------------------------------------------------------------|
|""Colin Paul Gloster" <Colin_Paul_Gloster(a)ACM.org> wrote in message |
|news:alpine.LNX.2.00.1006111207170.3608(a)Bluewhite64.example.net... |
|... |
|> If any of you claims that you posted an unclear phrase which had not |
|> been intended to be libelous which looks like you accused me of |
|> inappropriately trying to apply premature optimization, then clarify |
|> or be sued for libel. |
| |
|Don't be so sensitive!" |
|-----------------------------------------------------------------------------|

Okay, I accept that Randy was not being abusive. Mr. Carter is still
to explain himself.

|-----------------------------------------------------------------------------|
|"Optimization is premature unless there is a demonstrated need for additional|
|performance." |
|-----------------------------------------------------------------------------|

I understand what you mean. I might even agree somewhat in general,
but not in this particular case. If the program needs to be sped up,
then fiddling with the loops (not just the ones accounting for 90% of
the running time; ideally it should be possible to change the looping
policy across the entire program by changing a single line of code or
a compiler switch) can have a fair impact (admittedly pretty small in
the grand scheme of things, but still worthwhile). Changing from
looping down to zero to looping up from zero is not going to speed
things up (not counting changes in speed caused by, for example, a
pseudorandom number generator affected by the change in the executable:
I have seen this happen), and it is liable to slow things down. Looping
down to zero would be slightly faster, so why not just do it as a matter
of course? After making much more dramatic changes to the code in order
to speed it up, if it is still too slow, then reversing loop iteration
directions wastes man-hours to obtain speed which should have been
obtained by default.
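
For concreteness, a minimal Fortran sketch of the two directions (the file
and variable names are made up for illustration; whether the downward form
is measurably faster depends entirely on the compiler and the target):

! direction.f90 -- illustrative only.
PROGRAM direction
   IMPLICIT NONE
   INTEGER :: k
   REAL, dimension(0:2) :: a = (/1.0, 2.0, 3.0/)
   REAL :: s

   ! Upward loop, as in the code posted earlier in the thread.
   s = 0.0
   DO k = lbound(a, 1), ubound(a, 1)
      s = s + a(k)
   END DO
   print *, s

   ! Downward loop, counting towards the lower bound (zero here); some
   ! code generators turn the end-of-loop test into a compare against zero.
   s = 0.0
   DO k = ubound(a, 1), lbound(a, 1), -1
      s = s + a(k)
   END DO
   print *, s
END PROGRAM direction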

Not that looping down to zero is necessarily the best
solution. Fortress by default uses concurrent array indexing. Not that
Fortress succeeds in all of its goals re parallelism, but a so-far
unpublished paper of mine touches on that, which you could read after
it is published, if you are interested.

Mr. Carter's policy for looping is hacking, not engineering.

|-----------------------------------------------------------------------------|
|"In any case, being guilty of premature optimization has almost no reflection|
|on how good or bad of a programmer you are (something I am not in any |
|position to judge)." |
|-----------------------------------------------------------------------------|

Well I recognize a good language, so I mustn't be completely bad.

|-----------------------------------------------------------------------------|
|"[..] |
| |
|And you have to know that I have been writing optimizing Ada compilers for |
|the last 30 years (well, 29 years and 9 months to be exact), so I know how |
|to performance optimize when necessary. But..." |
|-----------------------------------------------------------------------------|

Infinity per cent times how long I have been writing Ada
compilers. Many of the pseudo-C++ programs I use have recently been
ported to Microsoft Windows, but the currently most important one
(critical pieces of which I am porting to Ada) still had not been
compiled on Windows, the last I heard. So maybe I will be ready to
order an RR optimizing compiler for Windows this year, or maybe you
will get round to releasing another GNU/Linux one.

|-----------------------------------------------------------------------------|
|"> Nasser M. Abbasi was not reverting to Ada as a complement to |
|> Mathematica with the objective of producing slower software than |
|> Mathematica. |
| |
|My understanding was that the OP was comparing the readability and |
|ease-of-creation of Fortran and Ada. I saw no indication that he was |
|concerned about the performance." |
|-----------------------------------------------------------------------------|

Well Nasser incorporated neither REVERSE nor WHILE in
news:hv57ap$m0l$1(a)speranza.aioe.org so my contributions to this thread
did not matter.

|-----------------------------------------------------------------------------|
|" And in many cases, the performance of the |
|code isn't very relevant." |
|-----------------------------------------------------------------------------|

True.

|-----------------------------------------------------------------------------|
|" (Remember the 90/10 rule!) [..] |
|[..]" |
|-----------------------------------------------------------------------------|

It is not always that simple. Aside from that, the 90% might be spread
across the code instead of in a single subprogram, which is one factor
as to why traditional UNIX(R) prof is only helpful in particular
circumstances.

|-----------------------------------------------------------------------------|
|"P.S. I wouldn't waste time on suing someone. Only the lawyers make money in |
|most lawsuits. " |
|-----------------------------------------------------------------------------|

Oh I will be suing Pisa so-called "University" for ruining my life
when it misled me about what I would be doing there and forbade me
from speaking out against one of the biggest, buggiest, copy-and-paste
fests I ever saw. Unfortunately, scientific journals do not accept
material that is already public knowledge, so the aforementioned paper
(which does not focus on Fortress, but instead contains details re lies
re supposedly optimal SystemC(R) code) must be published before I sue
Pisa so-called "University".

As for making money, the person who will be my lawyer is a nice person
and helped me while I was short on cash (something which cannot be
truthfully said of the participants of this newsgroup: a topic for a
name-and-shame section in the paper (those of you who tried to help
me however you could: don't worry, you are without blame)), so I have no
problem with letting the lawyer profit from putting those culprits in
gaol. At least there they will not find it so easy to promote shoddy
code in safety-critical devices and ruin other people's lives.

Sincerely,
Colin Paul Gloster
From: Nasser M. Abbasi on
On 6/14/2010 12:19 PM, Colin Paul Gloster wrote:

>
> |-----------------------------------------------------------------------------|
> |"> Nasser M. Abbasi was not reverting to Ada as a complement to |
> |> Mathematica with the objective of producing slower software than |
> |> Mathematica. |
> | |
> |My understanding was that the OP was comparing the readability and |
> |ease-of-creation of Fortran and Ada. I saw no indication that he was |
> |concerned about the performance." |
> |-----------------------------------------------------------------------------|
>
> Well Nasser incorporated neither REVERSE nor WHILE in
> news:hv57ap$m0l$1(a)speranza.aioe.org so my contributions to this thread
> did not matter.
>

I just wanted to say that I found your suggestions for code changes to
be very insightful and important.

But because the point of this small example was for me to learn how to
use complex numbers in Ada and compare it to Fortran, and not worry too
much at this time about optimization, I did not modify the code as you
suggested.

In addition, if I changed the Ada code, I would have to change the
Fortran code to keep it structured similarly to the Ada code, and I did
not want to go down that path.

That is the only reason, and no other.

regards,
--Nasser
From: Gautier write-only on
On Jun 10, 5:48 pm, Colin Paul Gloster <Colin_Paul_Glos...(a)ACM.org>
wrote:

> Additionally, loop unrolling should be considered.

Just a note for those who are not aware of it and who would be tempted to
unroll loops by hand: compilers are able to unroll loops themselves.
For instance, GNAT has had -funroll-loops for a long time (and also
-fpeel-loops and -funswitch-loops).
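
For what it is worth, gfortran understands the same flags, since both front
ends share the GCC optimizers; for example, on the dft.f90 posted earlier:

$ gfortran -O2 -funroll-loops -Wall -Wsurprising -Wconversion dft.f90 -o dft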
_________________________________________________________
Gautier's Ada programming -- http://sf.net/users/gdemont/
NB: For a direct answer, e-mail address on the following web site:
http://www.fechtenafz.ethz.ch/wm_email.htm
From: Colin Paul Gloster on
On Thu, 17 Jun 2010, Gautier sent:
|--------------------------------------------------------------------|
|"On Jun 10, 5:48 pm, Colin Paul Gloster <Colin_Paul_Glos...(a)ACM.org>|
|wrote: |
| |
|> Additionally, loop unrolling should be considered. |
| |
|Just a note for those who are not aware of and who'd be tempted to |
|unroll loops by hand: compilers are able to unroll loops themselves.|
|For instance GNAT has a -funroll-loops for a long time (and also - |
|fpeel-loops and -funswitch-loops)." |
|--------------------------------------------------------------------|

It is possible to obtain better performance by both manually unrolling
and having a compiler unroll for you at the same time, instead of by
relying on just the compiler or just manual unrolling. See for example
"Computer Architecture: A Quantitative Approach" (in which manual
unrolling was called "symbolic loop unrolling").
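
For anyone who wants to experiment, a minimal Fortran sketch of manual
unrolling by four (the file and variable names are made up; the compiler
can still be asked to unroll further with -funroll-loops):

! manual_unroll.f90 -- illustrative only.
! Build with, e.g.:  gfortran -O2 -funroll-loops manual_unroll.f90
PROGRAM manual_unroll
   IMPLICIT NONE
   INTEGER, parameter :: N = 1000
   INTEGER :: k
   REAL, dimension(0:N-1) :: a
   REAL :: s

   a = 1.0
   s = 0.0

   ! Main loop unrolled by hand, four elements per iteration.
   DO k = 0, N - 4, 4
      s = s + a(k) + a(k+1) + a(k+2) + a(k+3)
   END DO

   ! Clean-up loop for the leftover elements when N is not a multiple of 4.
   DO k = (N/4)*4, N - 1
      s = s + a(k)
   END DO

   print *, s
END PROGRAM manual_unroll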

Sincerely,
Colin Paul Gloster