From: Andy Glew on
On 7/29/2010 12:31 PM, nmm1(a)cam.ac.uk wrote:
> In article<aih356to4nkpn2518a8r80not679f5htol(a)4ax.com>,
> George Neuner<gneuner2(a)comcast.net> wrote:

> We are agreed there. However, (a) asynchronous I/O was introduced
> later than threading (seriously) and (b) that's no excuse for making
> a pig's ear of it.

Well, you know, some of us were proposing asynchronous I/O at the same
time as threading. (I wasn't on POSIX, but my boss was. Hi John if you
are out there!)

Unfortunately, I think it was HP that came along with an asynch proposal
that they had implemented on top of kernel threading. Which ruined their
mindset.
From: nmm1 on
In article <e01ddfe6-280f-485d-87ae-5b8b9c6f0f61(a)l20g2000yqm.googlegroups.com>,
MitchAlsup <MitchAlsup(a)aol.com> wrote:
>On Jul 29, 11:50 am, George Neuner <gneun...(a)comcast.net> wrote:
>> On Wed, 28 Jul 2010 19:28:54 +0100 (BST), n...(a)cam.ac.uk wrote:
>> >POSIX asynchronous I/O is the real mess. Not merely does it allow
>> >the user to specify any accessible location as a buffer, which is
>> >incompatible with most forms of DMA, it doesn't forbid the program
>> >from reading the contents of a buffer with data being read into it
>> >asynchronously. And appending is specified to occur in the order
>> >of the aio_write calls, whatever that means ...
>>
>> It's a sequential model.
>>
>> Preventing concurrent access isn't really possible unless the buffer
>> page is mapped out of the process during DMA.
>
>In my considered opinion, any program that expects to operate under
>any reasonable definition of correctness cannot be concurrently
>accessing any buffer that has I/O scheduled to/from it.

The case of reading a buffer that is in the process of being written
(i.e. two read-only actions) is debatable. I agree with you, but
there are arguments in the other direction.
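
To make that rule concrete: a correct caller issues the request, does
whatever else it likes, and only touches the buffer after the operation
has been confirmed complete. A minimal sketch (mine, not from any of the
posts above; the file name is arbitrary, and glibc wants -lrt at link
time):

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static char buf[4096];
    static struct aiocb cb;
    const struct aiocb *list[1] = { &cb };

    int fd = open("/etc/hostname", O_RDONLY);   /* any readable file will do */
    if (fd < 0)
        return 1;

    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;

    if (aio_read(&cb) != 0)
        return 1;

    /* Other work may happen here, but buf must not be touched yet. */

    while (aio_suspend(list, 1, NULL) != 0 && errno == EINTR)
        ;                                        /* block until completion */

    if (aio_error(&cb) == 0) {
        ssize_t n = aio_return(&cb);             /* like read()'s return value */
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n);  /* safe: the I/O is done */
    }
    close(fd);
    return 0;
}

The point is simply that every access to buf is ordered after
aio_suspend and aio_error have reported completion.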


Regards,
Nick Maclaren.
From: nmm1 on
In article <f50456ph3g65flpulq3n1bin6pdlil73qd(a)4ax.com>,
George Neuner <gneuner2(a)comcast.net> wrote:
>
>>See the specification of aio_read: "For any system action that changes
>>the process memory space while an asynchronous I/O is outstanding to the
>>address range being changed, the result of that action is undefined."
>>
>>Even ignoring the 'minor' detail that this says "system action" and
>>it is reasonable to interpret that as not applying to the program,
>>that says "changes" and not "accesses". So it explicitly implies
>>that a system action may read data with an aio_read outstanding on
>>it. Well, that's not sane.
>
>Badly written, I agree. I could argue that this is a paraphrasing of
>the spec and not the spec itself ... but I see your concern.

Eh? I was quoting the specification itself!

>You and I have different ideas about what constitutes synchronization.
>
>From the page for aio.h:
>"The aio_sigevent member [of struct aiocb] defines the notification
>method to be used on I/O completion. If aio_sigevent.sigev_notify is
>SIGEV_NONE, no notification is posted on I/O completion, but the error
>status for the operation and the return status for the operation shall
>be appropriately set."
>
>It's quite clear that in almost every case the caller is expected to
>provide an event and then wait for it to be signaled before touching
>the buffer.

In all reasonable standards, explicit constraints supersede implicit
ones. And the explicit constraint on when and how a buffer may be
used is what I quoted.
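
For reference, the notification pattern George describes (hand the
implementation an event in aio_sigevent and leave the buffer alone until
it fires) looks roughly like this. This is only a sketch of mine, not
text from the standard; the file name is invented, and glibc typically
needs -lrt and -pthread at link time:

#include <aio.h>
#include <fcntl.h>
#include <semaphore.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static sem_t done;

static void on_complete(union sigval sv)
{
    struct aiocb *cb = sv.sival_ptr;
    if (aio_error(cb) == 0)
        printf("read %ld bytes\n", (long)aio_return(cb));
    sem_post(&done);                          /* wake the waiting caller */
}

int main(void)
{
    static char buf[4096];
    static struct aiocb cb;

    int fd = open("/etc/hosts", O_RDONLY);    /* file name is purely illustrative */
    if (fd < 0)
        return 1;
    sem_init(&done, 0, 0);

    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_sigevent.sigev_notify = SIGEV_THREAD;
    cb.aio_sigevent.sigev_notify_function = on_complete;
    cb.aio_sigevent.sigev_value.sival_ptr = &cb;

    if (aio_read(&cb) != 0)
        return 1;

    sem_wait(&done);                          /* buf is not touched until here */
    close(fd);
    return 0;
}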

>As I said before, there isn't any way to keep a program from doing
>something stupid other than to prevent it with hardware. VMM page
>granularity makes that an unreasonable step to take.

That's irrelevant. What is at issue is what an implementation is
required to provide and what a conforming program may do and expect
to work. That is the primary purpose of a specification.
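
For what it is worth, the hardware approach George dismisses would
amount to something like the following in user-space terms. This is
purely a thought-experiment sketch of mine, not a workable design; page
granularity, and the fact that a thread-based aio implementation lives
in the same address space, are exactly why it is unreasonable:

#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long psz = sysconf(_SC_PAGESIZE);

    /* Give the I/O buffer a whole page to itself. */
    void *buf = mmap(NULL, (size_t)psz, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    /* ... submit the asynchronous request targeting buf here ... */

    /* Revoke access while the request is in flight: any CPU load or
       store to the page now faults.  A helper thread doing the I/O in
       the same address space would fault too, which is one reason this
       stays a thought experiment. */
    mprotect(buf, (size_t)psz, PROT_NONE);

    /* ... on completion notification ... */

    mprotect(buf, (size_t)psz, PROT_READ | PROT_WRITE);  /* restore access */
    munmap(buf, (size_t)psz);
    return 0;
}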

>>however you cut it, POSIX's specification of asynchronous I/O
>>is a disaster area.
>
>The interface is not well designed, I agree. It would be made less
>dangerous simply by severing the program's connection to the control
>block and providing the signal event (maybe in thread local storage)
>rather than asking for one. There aren't many legitimate reasons for
>handing off a chunk of address space and forgetting about it. If a
>program makes an aio_write call and then gracefully exits, there is an
>expectation that the write will complete so the process can't
>terminate until the call completes. Given that, there's no harm to
>requiring the signal event.

Er, no. There is nothing wrong with those approaches, or their
converse, but the defects of the design have little to do with such
minutiae and a great deal to do with the fact that it was not
designed by people who understood the area.

I haven't even mentioned the issue of alignment, which is a major
issue on many architectures. There are a LOT where simultaneous
access to the same cache line by the CPU and I/O can't be made
reliable. And it goes on from there.
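
One common mitigation for the cache-line problem is simply never to let
an I/O buffer share a line with anything else. A minimal sketch of mine,
assuming a 64-byte line size (a real program should query the platform
rather than hard-code it):

#include <stdio.h>
#include <stdlib.h>

#define CACHE_LINE 64   /* assumed line size; query the platform in real code */

static void *alloc_io_buffer(size_t nbytes)
{
    /* Round the size up to whole cache lines and request line alignment,
       so no unrelated data can share a line with the DMA target. */
    size_t padded = (nbytes + CACHE_LINE - 1) & ~(size_t)(CACHE_LINE - 1);
    void *p = NULL;
    if (posix_memalign(&p, CACHE_LINE, padded) != 0)
        return NULL;
    return p;
}

int main(void)
{
    void *buf = alloc_io_buffer(1000);   /* yields 1024 bytes, line-aligned */
    printf("I/O buffer at %p\n", buf);
    free(buf);
    return 0;
}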


Regards,
Nick Maclaren.
From: Terje Mathisen on
nmm1(a)cam.ac.uk wrote:
> In article<e01ddfe6-280f-485d-87ae-5b8b9c6f0f61(a)l20g2000yqm.googlegroups.com>,
> MitchAlsup<MitchAlsup(a)aol.com> wrote:
>> In my considered opinion, any program that expects to operate under
>> any reasonable definition of correctness cannot be concurrently
>> accessing any buffer that has I/O scheduled to/from it.

That is so obvious that it shouldn't even need to be written down.
Unfortunately "obvious" is obviously a very non-obvious term, or
something like that.
>
> The case of reading a buffer that is in the process of being written
> (i.e. two read-only actions) is debatable. I agree with you, but

So you meant to write 'in the process of being read'?

> there are arguments in the other direction.

Right. Like, if the OS knows that it has exclusive access to a buffer,
it can employ all sorts of dirty tricks with safety, up to and including
temporarily modifying parts of it (like a sentinel value on top of the
buffer end), but it probably shouldn't ever do so.

The pain will far outweigh any possible gain. :-)
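
For readers who have not met the trick: the classic form of a sentinel
is planting the sought value just past the end of the data so the inner
loop needs no bounds check, then restoring the original byte afterwards.
A user-space sketch of mine, assuming the caller really does own one
spare byte past the end, which is precisely the sort of assumption an OS
cannot make about a caller's buffer:

#include <stddef.h>

/* Returns the index of key in buf[0..len-1], or -1 if absent.
   Requires that buf[len] exists and may be modified temporarily. */
static ptrdiff_t find_byte(unsigned char *buf, size_t len, unsigned char key)
{
    unsigned char saved = buf[len];   /* the slot just past the valid data */
    buf[len] = key;                   /* sentinel: the scan is guaranteed to stop */

    size_t i = 0;
    while (buf[i] != key)             /* no bounds check needed in the loop */
        i++;

    buf[len] = saved;                 /* undo the temporary modification */
    return (i < len) ? (ptrdiff_t)i : -1;
}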

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
From: nmm1 on
In article <f2mai7-jes.ln1(a)ntp.tmsw.no>,
Terje Mathisen <"terje.mathisen at tmsw.no"> wrote:
>nmm1(a)cam.ac.uk wrote:
>> In article<e01ddfe6-280f-485d-87ae-5b8b9c6f0f61(a)l20g2000yqm.googlegroups.com>,
>> MitchAlsup<MitchAlsup(a)aol.com> wrote:
>>> In my considered opinion, any program that expects to operate under
>>> any reasonable definition of correctness cannot be concurrently
>>> accessing any buffer that has I/O scheduled to/from it.
>
>That is so obvious that it shouldn't even need to be written down.
>Unfortunately "obvious" is obviously a very non-obvious term, or
>something like that.

Yes, er, unobviously :-)

>> The case of reading a buffer that is in the process of being written
>> (i.e. two read-only actions) is debatable. I agree with you, but
>
>So you meant to write 'in the process of being read'?

No, I meant what I said, but meant 'being written to the device'.

>> there are arguments in the other direction.
>
>Right. Like, if the OS knows that it has exclusive access to a buffer,
>it can employ all sorts of dirty tricks with safety, up to and including
>temporarily modifying parts of it (like a sentinel value on top of the
>buffer end), but it probably shouldn't ever do so.
>
>The pain will far outweigh any possible gain. :-)

I strongly disagree, in a well-designed asynchronous interface, but
I strongly agree in a POSIX-like one!


Regards,
Nick Maclaren.