From: Yannick Duchêne (Hibou57) on
On Wed, 26 May 2010 16:15:41 +0200, Pascal Obry <pascal(a)obry.net> wrote:

Hi Pascal,

> Maybe you were expecting something that SPARK is not. SPARK is not a
> replacement for Ada.
Yes, I understand that; I did not believe otherwise.

> SPARK is designed for highly secure software.
That is where I disagreed with a previous similar statement from
someone else. I don't see why only critical applications should benefit
from verifiability. Applying standard logic to program construction
would not be a waste, and I even feel it should be an expected minimum.

> You
> won't create a Web server, an XML parser or even some kind of simulation
> and GUI with it. In those application domains you need full Ada (binding
> to external libraries, generics, full OO, standard libraries Ada.*,
> Interfaces.* and possibly GNAT.*, full text and stream IO...).
For IO, there are ways to have a specification and hide an
implementation, as SPARK_IO does. For simulation, it depends on what is
being simulated. For a GUI, I agree, as it mostly has to be flexible.
For an XML parser, it should be fine, since that is mainly a kind of
state machine. For a web server, I am not sure (perhaps yes for some
parts, not for others).
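
(For illustration, a minimal sketch, with hypothetical names, not
SPARK_IO itself: the spec is fully annotated and analysable by the
Examiner, while the body is hidden and free to use full Ada:)

   package Byte_Sink
   --# own State;                      -- abstract state of the sink
   is
      procedure Put (B : in Natural);
      --# global in out State;
      --# derives State from State, B;
   end Byte_Sink;

   package body Byte_Sink is
      --# hide Byte_Sink;              -- body excluded from analysis
      procedure Put (B : in Natural) is
      begin
         null;  -- a real body would call a non-SPARK library here
      end Put;
   end Byte_Sink;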

> I had to create a binding to the OS sockets in SPARK, I can tell you
> that it was not easy... Especially as this was my first experience with
> SPARK!
Well, I was trying to prove a personal implementation of a Deflate
stream (RFC 1951), decoder/encoder.

What problems did you encounter with the OS socket binding? (if that
can be explained without excessive difficulty)

> For embedded control-command application that's another story. I think
> that SPARK has something invaluable to offer.
>
> I have also thought that you can mix SPARK and Ada in the same
> application. Using SPARK in the critical part, and Ada for the rest...
> Don't know how well this would work as I have not gone through this yet.
OK, but what disappointed me is mainly that I could not make it
learn/use new rules properly (well, and the trouble that will probably
come with generics as well). I'm pretty sure that if this were
possible, it could be used in many more cases than only the so-called
highly critical ones.

> Just my 2 cents of course!
No, it was valuable.


I may try again, and see what can be done about this rules problem.


--
There is even better than a pragma Assert: a SPARK --# check.
From: Yannick Duchêne (Hibou57) on
On Wed, 26 May 2010 16:28:20 +0200, Dmitry A. Kazakov
<mailbox(a)dmitry-kazakov.de> wrote:
> Yes. I think this could be a direction in which Ada should evolve. It
> should have a modular part equivalent to SPARK, which can be used with
> certain compilation units. So that the programmer could choose the
> level of safety he is ready to invest into.

Please tell me more: do you mean a kind of pragma or compiler option,
like the ones that exist for runtime checks?
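
(The existing runtime-check mechanisms referred to are, for example:)

   pragma Suppress (Range_Check);        -- disable range checks here
   pragma Unsuppress (Overflow_Check);   -- force overflow checks back on
   --  or, globally, compiler switches such as GNAT's -gnatp / -gnato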

By the way, that's an opportunity for two other ideas: why not
integrate the SPARK language definition into the Ada standard? We could
have wording in the Ada reference or the annotated reference stating
that this or that is allowed or disallowed in SPARK. And a second
thing, related to the first: why not an ACATS test suite targeting
SPARK/Ada compilation capabilities alongside full Ada? Actually, how
can you test a compiler's compliance with SPARK? I feel you can only do
it for full Ada.

> It would be nice to be able to start the project at some middle
> level (a bit higher than that of present Ada, but lower than SPARK),
> and then gradually adjust it, as the project evolves.
Like ensuring it is written in valid SPARK syntax before we know
whether that part will really be semantically checked or not?

> Another 2 cents.
Re-no, re-was re-valuable


--
There is even better than a pragma Assert: a SPARK --# check.
From: Pascal Obry on
Yannick,

> That is where I disagreed with a previous similar statement from
> someone else. I don't see why only critical applications should
> benefit from verifiability. Applying standard logic to program
> construction would not be a waste, and I even feel it should be an
> expected minimum.

Right. I think the response to this is pragmatism and cost. For
non-critical software, do you think it is reasonable to use SPARK? It
is only a matter of trade-off, I think.

> What problems did you encounter with the OS socket binding? (if that
> can be explained without excessive difficulty)

The fact that you are working on a boundary: the secure SPARK world on
one side, and the non-SPARK foreign world of the OS on the other. One
difficult part was that this is IO (stream-like), where data are read
from the same socket but two consecutive reads won't return the same
value. There is a way in SPARK to deal with that, to avoid SPARK
complaining that you have ineffective code... not straightforward!
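
(Presumably this refers to SPARK's "external" own variables of mode in;
a rough, illustrative sketch, not the actual binding:)

   package Socket_In
   --# own in Port;                    -- external (volatile) input
   is
      procedure Read (B : out Natural);
      --# global in Port;
      --# derives B from Port;
   end Socket_In;

   package body Socket_In is
      --# hide Socket_In;              -- binds to the non-SPARK OS API
      procedure Read (B : out Natural) is
      begin
         B := 0;  -- a real body would call the OS receive routine here
      end Read;
   end Socket_In;

Because Port is declared as an external input, the Examiner accepts
that two successive calls to Read may return different values, instead
of flagging the second read as ineffective.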

Pascal.

--

--|------------------------------------------------------
--| Pascal Obry Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--| http://www.obry.net - http://v2p.fr.eu.org
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver keys.gnupg.net --recv-key F949BD3B

From: Dmitry A. Kazakov on
On Wed, 26 May 2010 21:28:58 +0200, Yannick Duchêne (Hibou57) wrote:

> On Wed, 26 May 2010 16:28:20 +0200, Dmitry A. Kazakov
> <mailbox(a)dmitry-kazakov.de> wrote:
>> Yes. I think this could be a direction in which Ada should evolve. It
>> should have a modular part equivalent to SPARK, which can be used with
>> certain compilation units. So that the programmer could choose the
>> level of safety he is ready to invest into.
>
> Please tell me more: do you mean a kind of pragma or compiler
> option, like the ones that exist for runtime checks?

Not run-time checks, but an option to state more in the contract, with
enforced static checks that it indeed holds. If you have no time, no
guts, or when the algorithm does not allow certain proofs, you simply
do not make promises you cannot keep, and move on.
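
(To make this concrete, a small illustrative sketch in SPARK 2005
notation of such an optional, statically checked contract:)

   --  Without the pre/post lines, only data and information flow are
   --  analysed; with them, the Examiner generates verification
   --  conditions that must be discharged by the proof tools.
   procedure Clamp (X, Lo, Hi : in Integer; R : out Integer);
   --# derives R from X, Lo, Hi;
   --# pre  Lo <= Hi;
   --# post (R >= Lo) and (R <= Hi);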

> By the way, that's an opportunity for two other ideas: why not
> integrate the SPARK language definition into the Ada standard? We
> could have wording in the Ada reference or the annotated reference
> stating that this or that is allowed or disallowed in SPARK.

I think it should be more than just two levels. But yes, each language
construct and each library operation shall have a contract.

> Actually, how can you test a compiler's compliance with SPARK? I
> feel you can only do it for full Ada.

Likely yes, because there exist legal Ada programs that no Ada compiler
could compile.

>> It would be nice to be able to start the project at some middle
>> level (a bit higher than that of present Ada, but lower than SPARK),
>> and then gradually adjust it, as the project evolves.

> Like ensuring it is written in valid SPARK syntax before we know
> whether that part will really be semantically checked or not?

Rather by refining the contracts. When you feel that the implementation
is mature, you can add more promises to its contract and see if they
hold (i.e. are provable). If they don't, you could try to re-implement
some parts of it. When you feel that it takes too much time, or is
impossible to prove, you can drop the idea of doing it formally. You
will still have gained a deeper understanding of how the thing works,
and you can document why you think it is correct, even if that is not
formally provable.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Yannick Duchêne (Hibou57) on
On Wed, 26 May 2010 21:32:02 +0200, Pascal Obry <pascal(a)obry.net> wrote:
> Right. I think the response to this is pragmatism and cost. For
> non-critical software, do you think it is reasonable to use SPARK? It
> is only a matter of trade-off, I think.
So, it depends on how it is balanced.

If a component of an application is to be executed thousands of times a
day or a week, and especially if it is to be executed automatically,
the trade-off should give weight to verifiability, even for
applications which are not known to be critical in the usual sense.

For others, I would say proofs could help to understand why it works
(not just how). This could be nice for pedagogy, as a formal comment is
always more readable than implicit human language (not that I don't
like the latter, just that it is sometimes too weak). And it helps
maintainability, like modifying this and that for more efficiency, or
simply adding a feature, with the assurance that nothing will be broken
along the way.

What kind of trade-off would not care about these?

(note: I was also thinking about some other reasons, not given here, to
keep this short)

> [...] where data are read from the same socket but two consecutive
> reads won't return the same value. There is a way in SPARK to deal
> with that, to avoid SPARK complaining that you have ineffective
> code... not straightforward!
>
> Pascal.
I see what the problem looked like.


I was thinking about something. With my attempted pre/post/check
annotations in mind, I was reading the SPARK sources, and noticed one
thing: the postconditions in the SPARK sources are far less detailed
than what I was attempting to do with my packages. To be honest, I was
not only trying to prove there will not be any runtime error, I was
also using postconditions to express properties and logic. Obviously,
this ends up in much more complicated things to demonstrate.

This may be a rule of thumb with SPARK: first, try to have the program
proved to be free of runtime errors; only then, when that is done, try
to add more annotations to express logical properties.
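
(A hedged illustration of the two levels, on a toy example rather than
the Deflate code:)

   --  Level 1: only what is needed to prove freedom from run-time
   --  errors (here, that A + B cannot overflow).
   procedure Average (A, B : in Natural; R : out Natural);
   --# derives R from A, B;
   --# pre A <= Natural'Last / 2 and B <= Natural'Last / 2;

   --  Level 2: the same operation with its logical property stated as
   --  well; these are the obligations that quickly become hard to
   --  discharge.
   procedure Average_Full (A, B : in Natural; R : out Natural);
   --# derives R from A, B;
   --# pre  A <= Natural'Last / 2 and B <= Natural'Last / 2;
   --# post R = (A + B) / 2;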

Maybe I was somewhere between proof of absence of runtime errors and
data refinement (I mean what is called « réification » in French).
Possibly SPARK is not at its best with the latter. After all, stating
that a procedure or function has this and that logical property is
another way of stating that it is the concrete implementation of the
abstraction which has these same properties; so this is
refinement/reification.

Perhaps SPARK can do that... a bit, and perhaps I was wrong to expect it
could fully do that.

So, this rule of thumb: start with a proof of freedom from runtime
errors first, only later try more, and don't expect that this extra
step is required to succeed?

However, being able to add rules which would be applied the same way
the built-in rules are would really be nice. Though I suspect this
would not be compatible with some efficiency requirements of the SPARK
Simplifier (if only I could build the Simplifier from modified sources,
but I can't)...
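
(For reference, the user rules in question go in *.rlu files that the
Simplifier reads alongside the generated VCs; written from memory, so
the exact syntax should be checked against the proof manuals, but they
look roughly like this:)

   mylemmas(1): X * (Y + Z) may_be_replaced_by X * Y + X * Z .
   mylemmas(2): A mod 2 = 0 may_be_deduced_from [ A = 2 * B ] .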


--
There is even better than a pragma Assert: a SPARK --# check.