From: Eli Osherovich on
On May 18, 8:45 pm, brady_fht <brady.m.ad...(a)gmail.com> wrote:
> Okay - I have changed the program slightly (Real declarations, added
> parentheses, and eliminated one variable):
>
> --------------------------------------------------
> PROGRAM DPStudy
>         IMPLICIT NONE
>         REAL(KIND=KIND(0.D0)) :: FVF
>         REAL(KIND=KIND(0.D0)) :: THR
>         REAL(KIND=KIND(0.D0)) :: Dum1
>
>         FVF = 3.3D0
>         THR = 3.0D0
>
>         Dum1 = (FVF * THR) * DSQRT(THR)
>
>         1 FORMAT(A, ES28.22)
>         WRITE(*,1) 'Dum1 = ', Dum1
> STOP
> ENDPROGRAM DPStudy
> --------------------------------------------------
>
> Now I want to focus on only two scenarios:
>
> --------------------------------------------------------
> COMPILER:
> gfortran -m32 -O0 main.f90
> OUTPUT:
> Dum1 = 1.7147302994931884256857E+01
> --------------------------------------------------------
>
> --------------------------------------------------------
> COMPILER:
> gfortran -m64 -O0 main.f90
> OUTPUT:
> Dum1 = 1.7147302994931880704144E+01
> --------------------------------------------------------
>
> So, neglecting optimization, the 32- and 64-bit versions of this program
> yield different results. And the program now uses parentheses to
> force the order of multiplication.
>
> Also, here is the binary representation (all 64 bits) of each result
> (1st line is 32 bit program result, and 2nd line is the 64 bit program
> result):
>
> 0110101110101011100101000110010110101101101001001000110000000010
> 1010101110101011100101000110010110101101101001001000110000000010
>
> So the only two bits that are different are the two least significant
> bits.
>
> I guess I am just confused as to how this number is being computed
> differently. Maybe I'm missing something obvious here or maybe
> somebody is already slapping me in the face with the answer, but I
> can't wrap my head around it.
>
> Thanks!

As somebody suggested earlier, use the -S flag to see the assembly code.
I bet you will discover that -m32 causes gfortran to use x87
instructions, while -m64 results in SSE code. The x87 unit keeps
intermediates in 80-bit extended-precision registers and rounds to 64
bits only when a value is stored to memory, whereas SSE rounds after
every operation, which explains the difference.
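If you want to confirm this without reading the assembly, you could try
forcing the 32-bit build onto the SSE unit (these are standard gcc/gfortran
flags, but they require an SSE2-capable CPU) and check whether its result
then matches the 64-bit one:

--------------------------------------------------------
COMPILER:
gfortran -m32 -mfpmath=sse -msse2 -O0 main.f90
--------------------------------------------------------

Alternatively, -ffloat-store forces stored variables out of the x87
registers into memory at 64-bit precision, though it does not round
every intermediate expression.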
From: mecej4 on
brady_fht wrote:

> Hello,
>
> I need some help understanding floating point operations. I am seeing
> a small difference in a program when it is compiled for 32 and 64 bit
> machines. Here is the program:
>
> --------------------------------------------------------
> PROGRAM DPStudy
> IMPLICIT NONE
> REAL(8) :: FVF
> REAL(8) :: THR
> REAL(8) :: Dum1, Dum2
>
> FVF = 3.3D0
> THR = 3.0D0
>
> Dum1 = FVF * THR * DSQRT(THR)
> Dum2 = FVF * DSQRT(THR) * THR
>
> 1 FORMAT(A, ES28.22)
> WRITE(*,*) ''
> WRITE(*,1) 'Dum1 = ', Dum1
> WRITE(*,1) 'Dum2 = ', Dum2
> WRITE(*,*) ''
> STOP
> ENDPROGRAM DPStudy
> --------------------------------------------------------
>
> Pretty simple. I am using gfortran 4.3.3 on Ubuntu 9.04. The following
> shows the different compiler options I use and the resulting output
> from the above program:
>
>
> --------------------------------------------------------
> COMPILER:
> gfortran -m32 -O0 main.f90
>
> OUTPUT:
> Dum1 = 1.7147302994931884256857E+01
> Dum2 = 1.7147302994931884256857E+01
> --------------------------------------------------------
>
> --------------------------------------------------------
> COMPILER:
> gfortran -m64 -O0 main.f90
>
> OUTPUT:
> Dum1 = 1.7147302994931880704144E+01
> Dum2 = 1.7147302994931884256857E+01
> --------------------------------------------------------
>
> --------------------------------------------------------
> COMPILER:
> gfortran -m32 -O1 main.f90
>
> OUTPUT:
> Dum1 = 1.7147302994931880704144E+01
> Dum2 = 1.7147302994931884256857E+01
> --------------------------------------------------------
>
> --------------------------------------------------------
> COMPILER:
> gfortran -m64 -O1 main.f90
>
> OUTPUT:
> Dum1 = 1.7147302994931880704144E+01
> Dum2 = 1.7147302994931884256857E+01
> --------------------------------------------------------
>
>
> So it appears the order of multiplication yields a different result,
> which doesn't surprise me all that much. However, it looks like when
> optimization is turned off for the 32-bit case, the compiler re-orders
> the multiplications. I tried going through the gfortran documentation
> to understand what happens when -O1 is used, but I couldn't find a
> single flag that triggers this behavior. Can anybody help me
> understand what is going on here?
>
> For what it's worth - similar behavior was observed using the Intel
> Fortran Compiler 11.1 for Windows.
>
> Thank you!

You should consider changing your viewpoint. We know that 64-bit IEEE reals
have 53 bits of precision, which is approximately 15 to 16 decimal digits
(53*log10(2) ≈ 15.95). Your two "different" results agree to 16 digits. To
me, they _are_ _the_ _same_!

Digits beyond the 16th are trash (artefacts of how the numbers are processed
on a specific CPU by a specific compiler with a specific set of options).
Once you condition yourself to regard these numbers as equal, the
futility of printing more than 17 significant digits will be apparent. The
seventeenth (or earlier digits, depending on the algorithm) may also be
junk, and seeing them differ on different processors can be needlessly
disconcerting.
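
If you want to see where the meaningful digits end, the standard numeric
inquiry intrinsics report it directly; a minimal example:

--------------------------------------------------
PROGRAM ShowPrecision
        IMPLICIT NONE
        REAL(KIND=KIND(0.D0)) :: X

        X = 0.0D0
        WRITE(*,*) 'Mantissa bits   :', DIGITS(X)     ! 53 for IEEE double
        WRITE(*,*) 'Decimal digits  :', PRECISION(X)  ! 15
        WRITE(*,*) 'Machine epsilon :', EPSILON(X)    ! about 2.22E-16
ENDPROGRAM ShowPrecision
--------------------------------------------------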

-- mecej4
From: Eli Osherovich on
On May 27, 3:06 pm, mecej4 <mecej4.nyets...(a)operamail.com> wrote:

> Digits beyond the 16th are trash (artefacts of how the numbers are processed
> on a specific CPU with a specific compiler with a specific set of options).
> Once you condition yourself to regarding these numbers as being equal, the
> futility of printing more than 17 significant digits will be apparent. The
> seventeenth (or earlier digits, depending on the algorithm) may also be
> junk, but seeing them being different on different processors may be
> comforting.
>
> -- mecej4

I think the OP demonstrated that those 53 bits are not the same (the
last two bits differ).
I believe the -m32 and -m64 flags result in different code: x87 and
SSE, respectively.
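
For what it's worth, here is one way to dump the raw bit pattern directly
from Fortran, so the two builds can be compared bit for bit. This is a small
sketch using the TRANSFER intrinsic and the B edit descriptor; it assumes
INTEGER kind 8 is a 64-bit integer, which holds for gfortran:

--------------------------------------------------
PROGRAM ShowBits
        IMPLICIT NONE
        REAL(KIND=KIND(0.D0)) :: Dum1
        INTEGER(KIND=8)       :: Bits   ! assumed to be a 64-bit integer kind

        Dum1 = (3.3D0 * 3.0D0) * SQRT(3.0D0)
        Bits = TRANSFER(Dum1, Bits)     ! reinterpret the 8 bytes of Dum1 as an integer
        WRITE(*,'(A,ES28.22)') 'Dum1 = ', Dum1
        WRITE(*,'(B64.64)') Bits        ! prints the most significant bit first
ENDPROGRAM ShowBits
--------------------------------------------------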