From: glen herrmannsfeldt on
Richard Maine <nospam(a)see.signature> wrote:
(snip on in-band signalling of REAL values)

>> Well, you can also forget to test with out-of-band signalling.

> True. But at least when you do so, you don't destroy the flag. Thus if
> you test sometime later, you will at least see that there was a flagged
> value. If you use an in-band signal and you forget to test, then you
> will usually do some computation with the value and quite possibly end
> up with something that is no longer a flag value. You have garbage with
> no evidence that it is garbage. Been there. Done that. Am *NOT* going
> back.

> I maintain that in-band tests are inherently fragile and that this is
> inherent enough to be almost independent of the problem. Failures of
> in-band tests have happened enough times, including in enough cases
> where people thought it would be reliable, that I don't think it needs
> more data. There is lots of data.

> Start with C's in-band string termination. That alone has caused quite
> enough havoc for many lifetimes. Yes, I consider it comparable.

Yes, not my favorite feature of C. Actually, you don't have to
use that feature, but you give up much of the C library if you don't.

> NaNs do have the nice property that they don't go away when you do
> computations on them. In a way, they are also an in-band signal. Ok, I
> suppose more than just in a way. But the hardware understands them, so
> you can't very well forget about them.

Yes, NaN seems a better choice, but not all systems have it
(though a very large fraction do by now). I think I was suggesting
NaN for the systems that have it, including the ability to test
for it, and some other value for the ones that don't. The usual
NaN test of (x.ne.x) is likely to be optimized out by some compilers.
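
For reference, a minimal sketch of that inline test wrapped in a
function (naive_isnan is a made-up name; under strict IEEE semantics
only a NaN compares unequal to itself, but aggressive floating-point
optimization may fold the comparison to .false. at compile time):

    ! Naive self-comparison test: true only for NaN, and only if
    ! the compiler does not optimize the comparison away.
    logical function naive_isnan(x)
      implicit none
      real, intent(in) :: x
      naive_isnan = (x .ne. x)
    end function naive_isnan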

-- glen

From: Sjouke Burry on
deltaquattro wrote:
> Hi,
>
> this is really more of a "numerical computing" question, so I am
> cross-posting to sci.math.num.analysis too. I decided to post on
> comp.lang.fortran anyway, because it is full of computational
> scientists and some sides of the issue are specifically related to
> the Fortran language.
>
> The problem is this: I am modifying a legacy code, and I need to
> compute some REAL values which I then store in large arrays. Sometimes
> it's impossible to compute these values: for example, when
> interpolating a table at a given abscissa, it may happen that the
> abscissa falls outside the curve boundaries. I have code which checks
> for this possibility, and if this happens the interpolation is not
> performed. However, now I must "store" somewhere the information that
> interpolation was not possible for that array element, and inform the
> user of it. Since the values can be either positive or negative, I
> cannot use tricks like initializing the array element to a negative
> value.
>
> I'm sure this has happened to you before: which solution did you use?
> Basically, I can think of three ways:
>
> 1. For each REAL array, I declare a LOGICAL array of the same shape,
> which contains 0 for correct values and 1 for missing values. I guess
> that's the cleanest way, but I have a lot of arrays and I'd rather not
> declare an extra array for each of them. I know it's not a memory
> issue (obviously LOGICAL arrays don't occupy a lot of space, even if
> they are big in my case!), but to me it seems like I'm adding
> redundant code. It would be better to declare arrays of a derived
> type, each element containing a REAL and a LOGICAL, but this would
> force me to modify the code in all the places where the arrays are
> used, and it's quite a big code.
>
> 2. I initialize a missing value to an extremely large positive or
> negative value, like 9e99. I think that's how the problem is usually
> solved in practice, isn't it? I'm a bit worried that this is not
> entirely "clean", since such values could in theory also result from
> the interpolation. However, since reasonable values of all the
> interpolated quantities are usually in the range -100 to 100, when
> this happens it is usually related to errors in the interpolation
> table data. So most likely it indicates an error which must be
> signaled to the user.
>
> 3. One could initialize the "missing" values to NaN. However, I then
> have to test for each array element being a NaN when I produce my
> output for the user. From what I remember about Fortran and NaN,
> there's (or there was) no portable way to do this... am I wrong?
>
> I would really appreciate your help on this issue, since I really
> don't know which way to choose and currently I'm stuck! Thanks in
> advance,
>
> Best Regards
>
> Sergio Rossi
Maybe silly, but store a NaN value as a marker for no data in the array.
That value should not appear in your valid data.
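
For what it's worth, a minimal sketch of that idea (and of option 3),
assuming a Fortran 2003 compiler that provides the IEEE_ARITHMETIC
module; the program and its names are made up for illustration:

    program missing_demo
      use ieee_arithmetic, only: ieee_value, ieee_quiet_nan, ieee_is_nan
      implicit none
      real    :: table(5)
      real    :: missing
      integer :: i

      ! Use a quiet NaN as the "interpolation failed" marker.
      missing = ieee_value(0.0, ieee_quiet_nan)
      table   = missing

      ! Pretend only some entries could be interpolated.
      table(2) = 1.5
      table(4) = -3.25

      ! Test with ieee_is_nan rather than an inline x /= x.
      do i = 1, size(table)
        if (ieee_is_nan(table(i))) then
          print *, i, ': interpolation not possible'
        else
          print *, i, ':', table(i)
        end if
      end do
    end program missing_demo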
From: Richard Maine on
glen herrmannsfeldt <gah(a)ugcs.caltech.edu> wrote:

> The usual
> NaN test of (x.ne.x) is likely to be optimized out by some compilers.

I don't consider that the "usual" test. In fact, I strongly recommend
against it. I consider the "usual" test to involve invoking a separate
function for the purpose. For today's compilers, that would be the
appropriate IEEE function. (Don't most of them do that by now?) I
consider writing new code for compilers of yore to generally be
ill-advised except in special cases (and I consider there to be fewer
justifiable special cases than there are attempts to claim such
justification.)

Even if one is writing for older compilers, I strongly recommend making
any such NaN test be a separate function rather than putting something
like an x.eq.x test inline. As long as it is a separate function, you
can do whatever it takes - invoke something compiler dependent, bit
twiddle, or even do the x.eq.x thing. Even if you do break down and use
the x.eq.x test, having it in a separate function allows you to
carefully compile that function with special "really don't optimize at
all" flags. It also avoids the possibility that the optimizer might make
your test case work fine, but then mess up a real case because of
different context (unless, of course, you tell it to do inlining or
interprocedural optimization including that function; don't do that).
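
As a concrete illustration, a minimal sketch of such a function,
assuming a compiler that provides the Fortran 2003 IEEE_ARITHMETIC
module (is_flagged is a made-up name; on an older compiler you would
swap in whatever vendor-specific test works, and compile this one
file with whatever "don't optimize" flags it needs):

    ! Keep this in its own file so it can be compiled separately.
    logical function is_flagged(x)
      use ieee_arithmetic, only: ieee_is_nan
      implicit none
      real, intent(in) :: x
      is_flagged = ieee_is_nan(x)
    end function is_flagged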

--
Richard Maine | Good judgment comes from experience;
email: last name at domain . net | experience comes from bad judgment.
domain: summertriangle | -- Mark Twain
From: robin on
"deltaquattro" <deltaquattro(a)gmail.com> wrote in message
news:c1a971ef-60bf-4009-8123-66777a1206a4(a)q12g2000yqj.googlegroups.com...
(snip problem statement)

| 1. For each REAL array, I declare a LOGICAL array of the same shape,
| which contains 0 for correct values and 1 for missing values.

You mean .false. and .true. respectively.

| I guess
| that's the cleanest way, but I have a lot of arrays and I'd rather not
| declare an extra array for each of them. I know it's not a memory
| issue (obviously LOGICAL arrays don't occupy a lot of space,

By default, they occupy the same space as INTEGER (one numeric
storage unit). However, your compiler might offer the option of
storing them as byte-sized elements.
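
For example, a minimal sketch (logical(kind=1) as a one-byte logical
is a compiler-dependent assumption, and the storage_size inquiry
requires a Fortran 2008 compiler):

    program logical_sizes
      implicit none
      logical         :: default_kind
      logical(kind=1) :: byte_kind   ! kind 1 is compiler-dependent

      print *, 'default LOGICAL:', storage_size(default_kind), 'bits'
      print *, 'LOGICAL(1):     ', storage_size(byte_kind), 'bits'
    end program logical_sizes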


From: glen herrmannsfeldt on
Richard Maine <nospam(a)see.signature> wrote:
> glen herrmannsfeldt <gah(a)ugcs.caltech.edu> wrote:

>> The usual
>> NaN test of (x.ne.x) is likely to be optimized out by some compilers.

> I don't consider that the "usual" test. In fact, I strongly recommend
> against it. I consider the "usual" test to involve invoking a separate
> function for the purpose. For today's compilers, that would be the
> appropriate IEEE function. (Don't most of them do that by now?) I
> consider writing new code for compilers of yore to generally be
> ill-advised except in special cases (and I consider there to be fewer
> justifiable special cases than there are attempts to claim such
> justification.)

I believe it is the "usual test" at the machine level.
Yes, it is best to have an appropriate function for it.

(snip on making the NaN test a separate, carefully compiled function)

Yes.
Though without inlining it may be slower than one would like.

-- glen