From: Richard Maine on
Bart Vandewoestyne <MyFirstName.MyLastName(a)telenet.be> wrote:

> So my problem is solved, but I have one more question: before my
> experiment, i thought that when single precision data was passed
> to a procedure requiring double precision data, the results would
> be 'single precision correct' instead of 'double precision correct'.

For the most part, you just thought wrong. With separate compilation,
which is typical of f77-style implicit interfaces, and which is what you
are using, the compiler doesn't even know that the procedure requires
double precision. The procedure might not have even been written yet
when the main program is compiled. Where is the compiler supposed to get
this information that doesn't yet even exist? When using implicit
interfaces, it is very, very important to understand that the compiler
in general knows nothing about the procedure other than what it can
deduce from looking at the call. If the call has single precision actual
arguments, then the deduction is that the procedure expects single
precision actual arguments. If that deduction is incorrect, you will get
garbage answers.
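A minimal sketch of that failure mode (hypothetical routine name; keep the two units in separate files, since some recent compilers will notice the mismatch when both are in one file):

```fortran
! dsum.f90 -- a routine that expects DOUBLE PRECISION input.
subroutine dsum(x, n, s)
  implicit none
  integer :: n
  double precision :: x(n), s
  s = sum(x(1:n))
end subroutine dsum

! main.f90 -- calls dsum through an f77-style implicit interface.
program mismatch
  implicit none
  real :: a(4) = (/ 1.0, 2.0, 3.0, 4.0 /)
  double precision :: s
  external dsum
  ! The compiler sees only this call, so it deduces that dsum takes a
  ! default REAL array.  Inside dsum the REAL bit patterns (and memory
  ! beyond the array) are reinterpreted as DOUBLE PRECISION, so the
  ! printed result is garbage, not 10.0.
  call dsum(a, 4, s)
  print *, s
end program mismatch
```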

With modules, the USE provides information about the procedure, allowing
the compiler to check that it is being called correctly. Thus you will
get error messages. There still won't be automatic conversion. That
would break more things than I care to discuss; I'll just leave it at
"won't happen". Sometimes compilers will similarly check implicit
interfaces when the information happens to be available, such as when
the procedure called is in the same file. This seems more common in
recent compilers, but you can't count on it, and there are plenty of
cases where the information isn't available anyway. This is still just a
check - not a correction.
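For comparison, a sketch of the same routine placed in a module; the USE makes the interface explicit, so the bad call is rejected at compile time rather than producing garbage at run time:

```fortran
module sums
  implicit none
contains
  subroutine dsum(x, n, s)
    integer, intent(in) :: n
    double precision, intent(in) :: x(n)
    double precision, intent(out) :: s
    s = sum(x)
  end subroutine dsum
end module sums

program checked
  use sums          ! explicit interface for dsum is now visible
  implicit none
  real :: a(4) = (/ 1.0, 2.0, 3.0, 4.0 /)
  double precision :: s
  call dsum(a, 4, s)   ! rejected: type mismatch for argument x
  print *, s
end program checked
```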

I have worked on a system where passing double actuals to single dummies
would (sort of) work in some cases. I even worked with code where people
took advantage of this and mistakenly thought that the compiler was
doing something special and "smart" about it. Instead, the compiler
didn't even notice and the bit patterns happened to work. That relies on
properties of the bit patterns for single and double - properties that
are quite rare in today's machines. You aren't on one of the rare
machines that has the properties needed. If I recall correctly, the code
in question was on a Gould/SEL, which did have those properties. The
practice caused problems when the code was ported elsewhere.

--
Richard Maine | Good judgement comes from experience;
email: last name at domain . net | experience comes from bad judgement.
domain: summertriangle | -- Mark Twain
From: glen herrmannsfeldt on
Gordon Sande wrote:

(snip)

> To the electronics of the computer there is no connection between
> the (typically) four byte data that YOU call a real and the (typically)
> eight byte data that YOU call double precision. There are some computers
> in which the first four of the eight bytes of the double precision are a
> real but not all do this. Of course there are conversion instructions.
> (In some computers at one time the longer form was for bankers and such
> so was in a decimal format so they were really different. Not common
> anymore but it might come back!)

And on a little endian machine, those bytes would be at the wrong end.

For IBM's Hexadecimal Floating Point (S/360, S/370, and HFP on ESA/390),
the first four bytes of a double precision value are the appropriate
truncated value for single precision. Even then, you wouldn't want to
pass a single precision value where a double precision value was
expected, as unknown values would be used for the rest of the bits.
In the case of an array, single precision values would be taken in
pairs as double precision values.
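This can be illustrated with the TRANSFER intrinsic. The sketch below assumes an IEEE-754 little-endian machine, where the leading bytes of a DOUBLE PRECISION value hold the low-order fraction bits rather than a truncated single-precision value:

```fortran
program bytes_demo
  implicit none
  double precision :: d = 1.5d0
  real :: s
  integer(1) :: raw(8)
  ! Copy the raw bytes of the double, then reinterpret the first four
  ! bytes as a default REAL.  On a little-endian IEEE machine these are
  ! the low-order fraction bytes of 1.5d0 (all zero), so s comes out as
  ! 0.0 -- not a single-precision 1.5 as it would under IBM HFP.
  raw = transfer(d, raw)
  s = transfer(raw(1:4), s)
  print *, 'double =', d, '  leading bytes as single =', s
end program bytes_demo
```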

-- glen

From: Bart Vandewoestyne on
On 2007-02-13, Bart Vandewoestyne <MyFirstName.MyLastName(a)telenet.be> wrote:
>
> [...]
> So my problem is solved, but I have one more question: before my
> experiment, i thought that when single precision data was passed
> to a procedure requiring double precision data, the results would
> be 'single precision correct' instead of 'double precision correct'.
> [...]

Thanks all for explaining this issue in a very understandable
way. I have more insight into the issue now. Again, I have learned
from this newsgroup!

Best wishes,
Bart

--
"Share what you know. Learn what you don't."
From: Bart Vandewoestyne on
On 2007-02-12, Thomas Koenig <Thomas.Koenig(a)online.de> wrote:
>
> You could also use gfortran, use matmul and specify the
> -fexternal-blas option :-)

I must admit: having experimented with it, I find this a very
useful option! Are there any other compilers supporting this
kind of functionality?
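For reference, a sketch of how the option is used with gfortran (the -fblas-matmul-limit value shown is illustrative; MATMUL calls on arrays above that size are dispatched to the external BLAS, here DGEMM):

```fortran
! Compile with, e.g.:
!     gfortran -O2 -fexternal-blas -fblas-matmul-limit=30 mm.f90 -lblas
program mm
  implicit none
  double precision :: a(200,200), b(200,200), c(200,200)
  call random_number(a)
  call random_number(b)
  c = matmul(a, b)    ! routed to DGEMM when -fexternal-blas is given
  print *, c(1,1)
end program mm
```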

Best wishes,
Bart

--
"Share what you know. Learn what you don't."
From: Bart Vandewoestyne on
On 2007-02-13, Walter Spector <w6ws_xthisoutx(a)earthlink.net> wrote:
>
> Bad answers. (As you've discovered.) The caller and callee must
> match. But since you were relying on pre-F90 style 'implicit interfacing',
> your compiler could not tell you in advance that there was a problem.
>
> Depending on whose BLAS you were using, there may be a module that
> you can USE with interface blocks for the calls. This would allow
> the compiler to check the call at compile time.

OK. I have been searching a bit, but it is not quite clear to me
what exactly I can use.

I came across:

BLAS: http://www.netlib.org/blas/
LAPACK: http://www.netlib.org/lapack/
ATLAS: http://math-atlas.sourceforge.net/

But there is *a lot* of information on these websites and it's
not easy to find what I need or can use. It would be nice if
somebody could point me in the right direction...

My main question is: what software provides modules that I can
'use' to make the 'dgemm' routine available in Fortran 95 code?
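Lacking a vendor-supplied module, one can be written by hand. A sketch, wrapping an interface block for the reference-BLAS DGEMM in a module (the module name is made up; the argument list follows the Netlib reference signature):

```fortran
module blas_interfaces
  implicit none
  interface
    subroutine dgemm(transa, transb, m, n, k, alpha, a, lda, &
                     b, ldb, beta, c, ldc)
      character, intent(in) :: transa, transb
      integer, intent(in) :: m, n, k, lda, ldb, ldc
      double precision, intent(in) :: alpha, beta
      double precision, intent(in) :: a(lda,*), b(ldb,*)
      double precision, intent(inout) :: c(ldc,*)
    end subroutine dgemm
  end interface
end module blas_interfaces
```

Any program unit that says USE blas_interfaces then gets its DGEMM calls checked at compile time.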

Once I've figured that out, new questions will probably arise,
but I'll keep those for later ;-)

Best wishes,
Bart

--
"Share what you know. Learn what you don't."