From: glen herrmannsfeldt on
m_b_metcalf <michaelmetcalf(a)compuserve.com> wrote:
> On May 16, 9:07 pm, Thomas Koenig <tkoe...(a)netcologne.de> wrote:
>> Consider the following program:

>> program main
>>   implicit none
>>   real, dimension(1) :: a, b
>>   real, dimension(1,1) :: aa, bb
>>   a(1) = -1.0
>>   b(1) = 0.0
>>   print *,a(1)*b(1)
>>   print *,dot_product(a,b)
>>   aa(1,1) = -1.0
>>   bb(1,1) = 0.0
>>   print *,matmul(aa,bb)
>> end program main

>> Assuming that the processor supports signed zeros, is the following
>> result OK?

>>  -0.0000000
>>   0.0000000
>>   0.0000000

> The standard states: "Processors that distinguish between positive and
> negative zeros shall treat them as equivalent ... as actual arguments
> to intrinsic functions other than SIGN ..."

I read that one, and didn't even consider how it might apply here.

It seems to me that the standard at various points is trying
to guarantee that (-0.0).EQ.(+0.0), which is true, as far as
I know, on all processors that allow for a signed zero.
That is what I think of when they say "equivalent."
To me, "equivalent" does not mean bitwise identical.
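
As a quick illustration (just a sketch, and assuming an IEEE
processor where negating 0.0 really does produce a negative zero):

      program eqtest
        implicit none
        real :: pz, nz
        pz = 0.0
        nz = -pz           ! on such a processor this is -0.0
        print *, pz == nz  ! prints T: the two zeros compare equal
      end program eqtest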

I am not sure which processors generate (-0.0) as the product.
Most likely not all of those that generate one from -(0.0)
(that is, from the unary negation operator) do so.

It seems to me that the natural way to write either dot_product
or matrix_multiply is to initialize a variable to zero and then
sum into that variable, which in this case gives (0.0)+(-0.0)
as the result.
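
Something along these lines, say (just a sketch of the obvious
reference code, with a made-up name, not what any particular
compiler actually emits):

      function my_dot(a, b) result(s)
        implicit none
        real, intent(in) :: a(:), b(:)
        real :: s
        integer :: i
        s = 0.0                ! accumulator starts at +0.0
        do i = 1, size(a)
          s = s + a(i)*b(i)    ! with one element this is (+0.0)+(-0.0), giving +0.0
        end do
      end function my_dot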

The question left, then, as I see it, is what the I/O library
should print given a negative zero.

> Thus, the result seems correct (and I haven't a clue what Robin is
> wittering on about).

He often uses the "answer first, read question later" approach,
which seems to apply in this case. I suppose some machines might
give a very small, non-zero, result in this case. That would
seem to be very surprising, but maybe not impossible. The product,
in some sense, has zero significant digits.

This reminds me of a feature of the IBM Stretch that allowed one
to specify at run time whether to shift in zero or one when doing
floating point post-normalization. The idea was that one could
run a program both ways and compare the result. In some, but
not all, cases that would show up errors due to loss of
significance. It might even be possible that (-1.0) times (0.0)
would return (plus or minus) epsilon(1.0) on Stretch, though
that would surprise me.

-- glen
From: Ron Shepard on
In article <hsrbs9$6us$1(a)speranza.aioe.org>,
glen herrmannsfeldt <gah(a)ugcs.caltech.edu> wrote:

> It seems to me that the natural way to write either dot_product
> or matrix_multiply is to initialize a variable to zero and then
> sum into that variable, which in this case gives (0.0)+(-0.0)
> as the result.

If my memory is correct, matmul() is not defined in terms of
dot_product() in the standard, it is defined with sum(). I don't
remember how dot_product is defined, but it is possible that a
compiler might evaluate sum() and matmul() differently than
dot_product() in this respect. For example, one or the other might
treat the size()=1 case differently. Or one or the other might be
evaluated as

sum = a(1)*b(1)
do i = 2, n
   sum = sum + a(i)*b(i)
enddo

which could explain some of the differences observed in some of the
compilers. The above loop would preserve the negative zero, whereas
running the loop from 1 would destroy the negative zero.

> The question left, then, as I see it, is what the I/O library
> should print given a negative zero.

Yes, particularly since list-directed I/O was used. To see the
bits, you might need to use a Z format, which although common is
nonstandard for real variables.
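
Something like this might do it (a sketch; it assumes default REAL
and default INTEGER occupy the same number of bits, and uses the
standard TRANSFER intrinsic so the Z edit descriptor is only applied
to an integer):

      program showbits
        implicit none
        real :: x
        x = -1.0 * 0.0                        ! -0.0 on an IEEE processor
        write (*, '(Z8.8)') transfer(x, 0)    ! e.g. 80000000 for a negative zero
      end program showbits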

> [...] The product,
> in some sense, has zero significant digits.

I'm not sure I agree with this. I would say that the product itself
has full precision (if the result is either positive or negative
zero). I guess the sum (0.0)+(-0.0) might be said to have zero
significant digits, but this does not seem to be a useful convention
either. For example, if you were to track the significant digits
through several expressions to determine the accuracy of the final
result, then the convention that (0.0)+(-0.0), and all subsequent
operations have no significant digits isn't the correct answer in a
practical sense.

> This reminds me of a feature of the IBM Stretch that allowed one
> to specify at run time whether to shift in zero or one when doing
> floating point post-normalization. The idea was that one could
> run a program both ways and compare the result. In some, but
> not all, cases that would show up errors due to loss of
> significance. It might even be possible that (-1.0) times (0.0)
> would return (plus or minus) epsilon(1.0) on Stretch, though
> that would surprise me.

The CRAY computers also allowed the least significant bit to be
wrong for floating point multiply (and the result depended on the
order the two operands were loaded into registers), although I think
multiplication by zero was always correct.

$.02 -Ron Shepard
From: glen herrmannsfeldt on
Ron Shepard <ron-shepard(a)nospam.comcast.net> wrote:
> In article <hsrbs9$6us$1(a)speranza.aioe.org>,
> glen herrmannsfeldt <gah(a)ugcs.caltech.edu> wrote:

>> It seems to me that the natural way to write either dot_product
>> or matrix_multiply is to initialize a variable to zero and then
>> sum into that variable, which in this case gives (0.0)+(-0.0)
>> as the result.

> If my memory is correct, matmul() is not defined in terms of
> dot_product() in the standard, it is defined with sum(). I don't
> remember how dot_product is defined, but it is possible that a
> compiler might evaluate sum() and matmul() differently than
> dot_product() in this respect. For example, one or the other might
> treat the size()=1 case differently. Or one or the other might be
> evaluated as

> sum = a(1)*b(1)
> do i = 2, n
>    sum = sum + a(i)*b(i)
> enddo

Yes, I might call that the unnatural way to write it.
Also, if done inline with dimensions known at compile time,
it could easily be sum=a(1)*b(1).

> which could explain some of the differences observed in some of the
> compilers. The above loop would preserve the negative zero, whereas
> running the loop from 1 would destroy the negative zero.

>> The question left, then, as I see it, is what the I/O library
>> should print given a negative zero.

> Yes, particularly since list-directed I/O was used. To see the
> bits, you might need to use a Z format, which although common is
> nonstandard for real variables.

Well, there is IEEE_IS_NEGATIVE() to test for negative values,
including negative zero.
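
For example (a sketch, assuming a compiler that provides the
Fortran 2003 IEEE intrinsic modules):

      program negzero
        use, intrinsic :: ieee_arithmetic, only: ieee_is_negative
        implicit none
        real, dimension(1) :: a, b
        a(1) = -1.0
        b(1) = 0.0
        print *, ieee_is_negative(a(1)*b(1))          ! T where the product is -0.0
        print *, ieee_is_negative(dot_product(a, b))  ! quite possibly F, per the above
      end program negzero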

>> [...] The product,
>> in some sense, has zero significant digits.

> I'm not sure I agree with this. I would say that the product itself
> has full precision (if the result is either positive or negative
> zero).

I should ask in a physics newsgroup. It seems that zero can have
either zero or infinite significant digits.

> I guess the sum (0.0)+(-0.0) might be said to have zero
> significant digits, but this does not seem to be a useful convention
> either. For example, if you were to track the significant digits
> through several expressions to determine the accuracy of the final
> result, then the convention that (0.0)+(-0.0), and all subsequent
> operations have no significant digits isn't the correct answer in a
> practical sense.

Well, consider x=123.456-123.456

now how many digits does x have?

>> This reminds me of a feature of the IBM Stretch that allowed one
>> to specify at run time whether to shift in zero or one when doing
>> floating point post-normalization. The idea was that one could
>> run a program both ways and compare the result. In some, but
>> not all, cases that would show up errors due to loss of
>> significance. It might even be possible that (-1.0) times (0.0)
>> would return (plus or minus) epsilon(1.0) on Stretch, though
>> that would surprise me.

> The CRAY computers also allowed the least significant bit to be
> wrong for floating point multiply (and the result depended on the
> order the two operands were loaded into registers), although I think
> multiplication by zero was always correct.

CRAY was good at fast and close enough, where others went for
slow and always right. In physics, most quantities have a
relative error, that is, an error that scales with the size of
the quantity. Floating point is well designed for such quantities,
and CRAY was designing for physicists.

I didn't look to see how Stretch did multiply, but after a floating
point multiply with normalized operands there is a potential one
digit post-normalization. If, following the "shift in ones"
option, Stretch multiply did that, you would get something close
to epsilon(1.) from 1.0*0.0, but also consider the case
1.0*(1.0-1.0).

-- glen
From: robin on
"glen herrmannsfeldt" <gah(a)ugcs.caltech.edu> wrote in message news:hsrbs9$6us$1(a)speranza.aioe.org...

| He often uses the "answer first, read question later" approach,
| which seems to apply in this case. I suppose some machines might
| give a very small, non-zero, result in this case. That would
| seem to be very surprising, but maybe not impossible. The product,
| in some sense, has zero significant digits.

The product has about k significant digits,
where k is the precision of default REAL.

Another instance of your "answer first, read question later" approach.