From: robin on
"Richard Maine" <nospam(a)see.signature> wrote in message news:1jckfrv.w6yp2rh7f8q7N%nospam(a)see.signature...
| Paul Hirose <jvcmz89uwf(a)earINVALIDthlink.net> wrote:
|
| > Nowadays even a budget Windows machine does floating point in hardware
| > per the IEEE standard, so I believe arc cosine of -1 is a safe way to
| > compute pi, for example. However, I'd be interested to hear of any
| > cases where this would have caused trouble.
|
| Ouch! Ouch! Ouch!
|
| Hey, Fortran is often used for numeric stuff and us folk are supposed to
| have at least a surface acquaintance with issues of numeric accuracy.
| The arc cosine of -1 is just about the worst case I can think of off the
| top of my head in terms of numerical robustness. Hint: look at the arc
| cosine of numbers very close to -1.

That's irrelevant. (Hint: how far away is -1 from -1?)

One would expect that an argument of -1 would be a special case.

| If one is going to use trig
| expressions for things like that, at least use them at decently robust
| points. That's independent of the language and the hardware - just plain
| numerics.

Just for the record --

WRITE (*, '(F20.17)' ) 4*ATAN(1.0D0)
WRITE (*, '(F20.17)' ) ACOS(-1.D0)
WRITE (*, '(F20.17)' ) 2*ASIN(1.0D0)

produce --

3.14159265358979312
3.14159265358979312
3.14159265358979312



From: Giorgio Pastore on
Richard Maine wrote:
....
> Am I going to go to the trouble of trying it on numerous compilers
> (including ones that I don't have handy current access to)? No. Have I
> bothered to keep records of such a particular from the past? No.
>
> But do I know enough about numerics to consider it a real problem? Yes.
> It doesn't take much numeric knowledge to be concerned about it.
>
> As I mentioned, consider the result of input values slightly perturbed
> from the nominal -1.0. That kind of consideration is the basis of much
> of the study of numerical stability.
>
> For a value 1 bit smaller in magnitude than -1.0, the acos ought to be
> off by quite a bit. No, I'm not going to sit down and run a case (though
> I think I recall people doing such and posting the results here in the
> past). Just look at the shape of the acos curve there; no computer
> required. For a value 1 bit larger in magnitude than -1.0, there is no
> correct answer.

Nobody is asking you to try different compilers or to have kept records
from the past. I was just asking if you ever had *direct experience* of
inaccurate pi from acos(-1.0).

I do not find convincing your point about the shape of arccos(x) close
to -1 and the effect of small perturbations of -1.0.
Firstly, because -1.0 is exactly representable, and one passes exactly
-1.0 to get pi, nothing else. But, more importantly, because what really
matters is the algorithm used for implementing acos.
And algorithms for acos may work much better than any a priori
analysis would suggest. For example, acos could be evaluated through
atan. Or something else.
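
For concreteness, here is a minimal sketch of the kind of probe both
sides are arguing about (program and kind names are arbitrary). An
argument of exactly -1.0_dp exercises only the constant the library
returns at the end point; arguments a few ulps away show the steep
slope Richard is pointing at.

program acos_probe
  implicit none
  integer, parameter :: dp = selected_real_kind(15)
  real(dp) :: x
  integer :: k

  ! Exactly -1.0 is representable, so this line exercises only the
  ! constant the acos implementation returns at the end point.
  write (*, '(a,f22.17)') 'acos(-1)        = ', acos(-1.0_dp)
  write (*, '(a,f22.17)') '4*atan(1)       = ', 4*atan(1.0_dp)

  ! A few ulps away from -1.0 the curve is steep: the result differs
  ! from pi by roughly sqrt(2*k*epsilon), not by k*epsilon.
  do k = 1, 3
    x = -1.0_dp + k*epsilon(1.0_dp)
    write (*, '(a,i0,a,f22.17)') 'acos(-1 + ', k, '*eps) = ', acos(x)
  end do
end program acos_probe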

I agree that some caution is in order (and some paranoia is always safe;
in the past I even found a compiler that thought sqrt(1.0) was 1.2!).
But serious numerical work always requires checking and comparing
results from the same calculation on different platforms.

....

> I have used literals for pi for over 40 years of intensive numerical
> computing (I started in 1968). I have never seen evidence of trouble
> with that.
>

Ok. That was your experience. My experience (I started in 1975) has been
that I have never seen evidence of problems with acos(-1.0) over a great
many different combinations of hardware and software (probably more than
100, but I did not bother to keep records of the exact number :-) ).

I am just curious to know if anybody here has ever had trouble with
acos(-1.0). I do not want you or anybody else to abandon your safe
programming practices! I am just asking about direct experiences, not
theoretical analysis.

Giorgio
From: glen herrmannsfeldt on
Giorgio Pastore <pastgio(a)units.it> wrote:
(medium sized snip)

> I do not find convincing your point about the shape of arccos(x) close
> to -1 and the effect of small perturbations of -1.0.
> Firstly, because -1.0 is exactly representable, and one passes exactly
> -1.0 to get pi, nothing else. But, more importantly, because what really
> matters is the algorithm used for implementing acos.
> And algorithms for acos may work much better than any a priori
> analysis would suggest. For example, acos could be evaluated through
> atan. Or something else.

> I agree that some caution is in order (and some paranoia is always safe;
> in the past I even found a compiler that thought sqrt(1.0) was 1.2!).
> But serious numerical work always requires checking and comparing
> results from the same calculation on different platforms.

As I wrote before, at least one library computes ACOS(X) for X near -1 as

PI/2 + PI/2 - 2*ASIN(SQRT((1+X)/2))

ASIN(X) = X for very small arguments (the specific library routine also
documents that), so for X = -1.0 exactly the SQRT is zero and the result
is just the stored PI/2 + PI/2.

That probably isn't so unusual, but there is no guarantee.
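
A sketch of that style of identity, for illustration only (assuming the
(1+X)/2 form above; this is not any particular library's source):

program acos_identity
  implicit none
  integer, parameter :: dp = selected_real_kind(15)
  ! Stand-in for the library's stored constant; a real library would
  ! carry pi/2 to more than working precision.
  real(dp), parameter :: half_pi = 1.57079632679489661923_dp
  real(dp) :: x

  x = -1.0_dp
  ! For x near -1:  acos(x) = pi/2 + pi/2 - 2*asin(sqrt((1 + x)/2));
  ! at x = -1 the sqrt is exactly zero, so the asin contributes nothing.
  write (*, '(f20.17)') half_pi + half_pi - 2*asin(sqrt((1 + x)/2))
  write (*, '(f20.17)') acos(x)
end program acos_identity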

If one generates the minimax polynomial expansion around X=-1,
it is likely that the constant term isn't the closest approximation
to PI.

I have not thought about CORDIC for a while, so I don't know
what it does in this case. There have been some bugs found
over the years in the CORDIC algorithms in HP calculators for some
trig functions and some arguments. CORDIC might be used by some
hardware implementations, but it is not likely to be used in software.

-- glen

From: James Van Buskirk on
"Steven Correll" <steven.correll(a)gmail.com> wrote in message
news:4ba460e5-92b4-4a3b-8fcb-1c6eb3117838(a)14g2000yqp.googlegroups.com...

> But whatever answer the compiler-writers chose,
> it won't bother you if you chose to hard-code the constants yourself.

The thing that bugs me is readability and maintainability. If you
have a bunch of constants written out as literals, they must all be
to at least 34 digits accuracy because someone could INCLUDE your
code in a quad-precision module.

To see what I'm talking about, get something like
http://home.comcast.net/~kmbtib/fft49t.i90 and
http://home.comcast.net/~kmbtib/fft49t.f90 .

In fft49t.f90, module mykinds, change '15' to '30', compile and run.
Now, if the constants weren't written out to the 35 or so digits
accuracy that they are in fft49t.i90, the errors would no longer be
around 1.0e-33 as they are. And yet only 7 or 16 digits of the
constants are ordinarily needed because normally one doesn't carry
out quad-precision FFTs. But even a single error in any of the
constants would trash at least the quad-precision results. To
check the constants by hand would be so onerous that no one would
even think about doing it. Having literal constants that are as
unreadable as this sort of defeats the purpose of working in a
high-level language. That's why I like the initialization
expression technique where applicable: you have a fighting chance
to look at them and decide whether they are what you intended,
even if they are ultimately used in a quad-precision context.
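
A minimal sketch of the contrast (module and kind names are made up, and
it assumes a compiler that accepts these intrinsics in initialization
expressions, as Fortran 2008 requires):

module trig_constants
  implicit none
  ! Flip 15 <-> 30 here, as with module mykinds in fft49t.f90.
  ! (Whether a kind with 30 digits exists is compiler-dependent.)
  integer, parameter :: wp = selected_real_kind(30)

  ! Literal style: the digits have to be good enough for the widest kind
  ! anyone might ever compile this with, and are hard to proofread.
  real(wp), parameter :: pi_literal = &
    3.14159265358979323846264338327950288_wp

  ! Initialization-expression style: readable, checkable by eye, and it
  ! tracks wp automatically.
  real(wp), parameter :: pi = 4*atan(1.0_wp)
  real(wp), parameter :: w8 = cos(2*pi/8)   ! e.g. one twiddle-factor cosine
end module trig_constants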

--
write(*,*) transfer((/17.392111325966148d0,6.5794487871554595D-85, &
6.0134700243160014d-154/),(/'x'/)); end


From: glen herrmannsfeldt on
James Van Buskirk <not_valid(a)comcast.net> wrote:
(snip)

> The thing that bugs me is readability and maintainability. If you
> have a bunch of constants written out as literals, they must all be
> to at least 34 digits accuracy because someone could INCLUDE your
> code in a quad-precision module.

> To see what I'm talking about, get something like
> http://home.comcast.net/~kmbtib/fft49t.i90 and
> http://home.comcast.net/~kmbtib/fft49t.f90 .

I guess I haven't looked at FFT source recently. I thought the
ones I remembered generated the twiddle factors as successive
square roots of (0.,1.).
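
Presumably something along these lines (a sketch of the idea, not the
code being remembered):

program roots_of_i
  implicit none
  integer, parameter :: dp = selected_real_kind(15)
  complex(dp) :: w
  integer :: k

  w = (0.0_dp, 1.0_dp)        ! i = exp(i*pi/2), a primitive 4th root of unity
  do k = 3, 6
    ! The principal square root halves the angle: w becomes the primitive
    ! 2**k-th root of unity exp(i*pi/2**(k-1)).
    w = sqrt(w)
    write (*, '(2f20.16)') w
  end do
end program roots_of_i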

> In fft49t.f90, module mykinds, change '15' to '30', compile and run.
> Now, if the constants weren't written out to the 35 or so digits
> accuracy that they are in fft49t.i90, the errors would no longer be
> around 1.0e-33 as they are. And yet only 7 or 16 digits of the
> constants are ordinarily needed because normally one doesn't carry
> out quad-precision FFTs. But even a single error in any of the
> constants would trash at least the quad-precision results. To

I don't want to get into any more arguments on the range reduction
of trig functions, but having to multiply by pi just so the function
can then divide by pi doesn't seem so useful. One might hope that the
value returned by ACOS(-1.D0) is appropriate for the range reduction
of COS(), independent of its accuracy.

> check the constants by hand would be so onerous that no one would
> even think about doing it. Having literal constants that are as
> unreadable as this sort of defeats the purpose of working in a
> high-level language. That's why I like the initialization
> expression technique where applicable: you have a fighting chance
> to look at them and decide whether they are what you intended,
> even if they are ultimately used in a quad-precision context.

I think I have said this too many times before, but it would be nice
to have SIND() and COSD() in Fortran, as some other languages do.
Binary fractions of 180 stay exact, and the argument range
reduction can be done easily and pretty much exactly.
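
SIND is not standard Fortran, but the idea is easy to sketch (the
function name and kind here are just illustrative):

! sind() is not a standard intrinsic; this only illustrates the
! degree-based range reduction described above.
function sind(xdeg) result(s)
  implicit none
  integer, parameter :: dp = selected_real_kind(15)
  real(dp), intent(in) :: xdeg
  real(dp) :: s, r, pi

  pi = 4*atan(1.0_dp)
  ! Reduce in degrees first: exact for arguments that are binary
  ! fractions of a degree (30.0, 123.5, 1000000.25, ...).
  r = modulo(xdeg, 360.0_dp)
  ! Convert to radians only after reduction, so there is never a huge
  ! multiple of pi to subtract off.
  s = sin(r*(pi/180.0_dp))
end function sind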

Though I am still not sure of the need for a quad precision FFT.

As I said previously, there are times when high precision is
needed, but high accuracy isn't, for example when subtracting
values that are close. That might be true in some FFT cases.

-- glen