From: Phil Thornley on
I have had a hobby project for a while to create SPARK versions of
data structures, supported by partial proofs of correctness.

However, if the proofs are tackled in the most obvious way, the effort
required to show that the relevant data structure invariant is
maintained by each operation grows out of all proportion to the size of
the code being proved - to the point where it would be easier to
validate the code directly than to validate all the proof artefacts
(annotations and rules) generated for the proof.

So I have recently tried an alternative approach that shows some
potential for completing the proofs without requiring disproportionate
effort.

The idea is to determine, for each path through an operation that
exports a value of the data structure, the relationship between the
imported and exported values, and to create a single proof rule that
encapsulates that transformation. If the rule proves the postcondition
VC generated for the path, then the code implements the transformation.
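
As a rough sketch of the shape this takes (the names below are invented
here for illustration and are not the code on the website), an append
operation on an ordered list might be specified in SPARK 2005 notation
as:

   package Int_Lists
   is
      Max_Size : constant := 100;
      subtype Index_Range is Integer range 1 .. Max_Size;
      subtype Count_Range is Integer range 0 .. Max_Size;
      type Vector is array (Index_Range) of Integer;

      type List is record
         Elements : Vector;
         Length   : Count_Range;
      end record;

      -- Proof function standing for the data structure invariant.
      --# function Ordered (L : List) return Boolean;

      -- Append a value no smaller than the current last element
      -- (restricted to non-empty lists to keep the sketch to one path).
      procedure Append (L : in out List; Value : in Integer);
      --# derives L from L, Value;
      --# pre  Ordered (L) and
      --#      L.Length >= 1 and L.Length < Max_Size and
      --#      L.Elements (L.Length) <= Value;
      --# post Ordered (L) and
      --#      L.Length = L~.Length + 1 and
      --#      L.Elements = L~.Elements[L~.Length + 1 => Value];

   end Int_Lists;

For the single path through the body of Append, the relationship between
the imported and exported list is "Value written at position Length + 1
and Length incremented", and the user rule for that path states that this
transformation preserves Ordered. Roughly (the exact FDL spelling depends
on the VCs that the Examiner generates for the body):

   append_user(1): ordered(upf_length(upf_elements(L,
                       update(fld_elements(L), [fld_length(L) + 1], V)),
                       fld_length(L) + 1))
                   may_be_deduced_from
                   [ ordered(L),
                     fld_length(L) >= 1,
                     element(fld_elements(L), [fld_length(L)]) <= V ] .

If the Simplifier discharges the postcondition VC for that path using
this rule, then the code implements the stated transformation.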

This is clearly a less rigorous approach than proving that the code
maintains all the individual conditions within the invariant, but it
is much more practicable.

As an example of this approach I have completed the proofs for an
ordered list of integers and put this up on http://www.sparksure.com
(I have included a stack and a queue as well, but neither of these
requires data structure invariants).

I am very interested in any comments anybody might have about this
idea - either here on cla or via the email address on the website.

Phil Thornley

From: Matteo Bordin on
Hi Phil,

You may be interested in having a look at Hi-Lite:

http://www.open-do.org/projects/hi-lite/

and in particular this discussion:

http://lists.forge.open-do.org/pipermail/hi-lite-discuss/2010-June/000129.html

Cheers,

Matteo



On Jul 20, 2:40 pm, Phil Thornley <phil.jpthorn...(a)gmail.com> wrote:
> I have had a hobby project for a while to create SPARK versions of
> data structures, supported by partial proofs of correctness.
>
> However, if the proofs are tackled in the most obvious way then the
> effort required to show that the relevant data structure invariant is
> maintained by each operation grows out of all proportion to the size
> of the code being proved, and it would be easier to validate the code
> directly than to validate all the proof artefacts (annotations and
> rules) that are generated for the proof.
>
> So I have recently tried an alternative approach that shows some
> potential for completing the proofs without requiring disproportionate
> effort.
>
> The idea is to determine, for each path through an operation that
> exports a value of the data structure, the relationship between the
> imported value and exported value and to create a single proof rule
> that encapsulates that transformation.  If the rule proves the
> postcondition VC generated for the path then the code implements the
> transformation.
>
> This is clearly a less rigorous approach than proving that the code
> maintains all the individual conditions within the invariant, but it
> is much more practicable.
>
> As an example of this approach I have completed the proofs for an
> ordered list of integers and put this up on http://www.sparksure.com
> (I have included a stack and a queue as well, but neither of these
> require data structure invariants).
>
> I am very interested in all comments that anybody might have about
> this idea - either here on cla or via the email address on the
> website.
>
> Phil Thornley

From: Phil Thornley on
On 20 July, 15:33, Matteo Bordin <matteo.bor...(a)gmail.com> wrote:

> you may be interested in having a look at Hi-Lite:
>
> http://www.open-do.org/projects/hi-lite/
>
> and in particular this discussion:
>
> http://lists.forge.open-do.org/pipermail/hi-lite-discuss/2010-June/00...

Thanks for that - a very relevant discussion! However, I did notice
this in one of the contributions:
"Our major focus is to make user proofs easy. Whether or not we prove
the implementations of such containers is to be decided latter and not
our current focus."

But my main interest is in proving the implementations, so fortunately
I'm not directly duplicating this work.

Cheers,

Phil
From: Peter C. Chapin on
On 2010-07-20 08:40, Phil Thornley wrote:

> I have had a hobby project for a while to create SPARK versions of
> data structures, supported by partial proofs of correctness.

I can't comment right now on the approaches you mentioned, but I think
in general it's a great idea to work up some SPARK versions of classic
data structures (with proofs at some reasonable level). An open-source,
high-integrity collections library would be a great contribution, it
seems to me.

Alas, right now SPARK's limitations regarding generics seem like a bit
of a show stopper. Users would have to manually specialize each data
structure and redo the proofs. Of course, that would still be easier
than building the code from scratch.
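
Just to illustrate the manual specialization I mean (this is only a
sketch I am making up here, not Phil's code): today each element type
needs its own copy of the package, and the proofs have to be repeated
for each copy.

   package Integer_Stacks
   is
      Max_Depth : constant := 64;
      subtype Element is Integer;   -- the only line that really changes
      type Stack is private;

      --# function Count (S : Stack) return Natural;

      procedure Push (S : in out Stack; X : in Element);
      --# derives S from S, X;
      --# pre  Count (S) < Max_Depth;
      --# post Count (S) = Count (S~) + 1;

   private
      subtype Count_Type is Natural range 0 .. Max_Depth;
      subtype Depth_Range is Positive range 1 .. Max_Depth;
      type Element_Array is array (Depth_Range) of Element;
      type Stack is record
         Items : Element_Array;
         Top   : Count_Type;
      end record;
   end Integer_Stacks;

A Float_Stacks package would repeat all of this with "subtype Element is
Float;" and its VCs would have to be discharged all over again. Generics
should remove that duplication of both code and proof effort.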

I know Praxis is working on generics support in SPARK. Once that is
ready, it probably wouldn't be too hard for the original author of a
high-integrity collections library to generalize the code as
appropriate. At least that's my hope.

Peter