From: Ian Collins on
On 04/10/10 09:25 AM, Peter Olcott wrote:

[give up on reformatting]

> Can I use TCP sockets as the kind of sockets that I am using
> for IPC?

Well, stream sockets are closer to pipes than datagram-based ones, but
you do lose the advantages cited in Rainer's now horribly mangled list.
If you are sending small packets of data, using multicast, or working
with a connectionless setup, datagram-based sockets make more sense.

So the answer is probably yes, but datagram-based sockets may be a
better solution for your problem, assuming delivery is guaranteed.
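
For what it's worth, local IPC over a Unix-domain datagram socket needs
very little code. Here is a rough, untested sketch (the socket path
/tmp/ipc_demo.sock and the message text are made up for illustration):

/* Local IPC over a Unix-domain datagram socket.  Each sendto() is
 * delivered as one whole message, so no framing layer is needed,
 * unlike a stream socket or a pipe. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

#define SOCK_PATH "/tmp/ipc_demo.sock"

int main(void)
{
    struct sockaddr_un addr;
    char buf[256];

    /* Receiver: bind a datagram socket to a filesystem path. */
    int rx = socket(AF_UNIX, SOCK_DGRAM, 0);
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, SOCK_PATH, sizeof addr.sun_path - 1);
    unlink(SOCK_PATH);
    if (bind(rx, (struct sockaddr *)&addr, sizeof addr) == -1) {
        perror("bind");
        return 1;
    }

    /* Sender: normally a separate process; done here in one
     * process only to keep the sketch self-contained. */
    int tx = socket(AF_UNIX, SOCK_DGRAM, 0);
    sendto(tx, "hello", 5, 0, (struct sockaddr *)&addr, sizeof addr);

    ssize_t n = recvfrom(rx, buf, sizeof buf, 0, NULL, NULL);
    if (n >= 0)
        printf("got %zd bytes: %.*s\n", n, (int)n, buf);

    close(tx);
    close(rx);
    unlink(SOCK_PATH);
    return 0;
}

If you do go with TCP instead, the same calls apply with AF_INET and
SOCK_STREAM, but you then have to add your own message framing on top
of the byte stream.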

--
Ian Collins
From: Bob Smith on
Peter Olcott wrote:
> Are there any reliability issues or other caveats with using
> named pipes?

As one reply says, "exactly as described", but there are a few
things that are of note.

First, a named pipe protects the reader of the pipe. If the
reader does not keep up, it is the writer to the pipe that
gets the EPIPE error. I use a fanout device to get around
this problem.

Second, the writer can write about 4K bytes before getting
blocked if the reader is not reading. This is usually not
a problem but was for some of my robotic control software.
I now use a proxy device for this.
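
To make those two points concrete, here is a rough, untested sketch of
the writer side of a named pipe (the FIFO path is made up):

/* With O_NONBLOCK the writer sees EAGAIN instead of blocking once the
 * kernel's pipe buffer is full, and EPIPE (plus SIGPIPE unless it is
 * ignored) once the reader has closed its end. */
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/demo_fifo";
    char chunk[512];

    signal(SIGPIPE, SIG_IGN);   /* turn SIGPIPE into an EPIPE return */
    mkfifo(path, 0666);         /* EEXIST on reruns is harmless      */
    memset(chunk, 'x', sizeof chunk);

    /* Opening write-only and non-blocking fails with ENXIO until a
     * reader has the FIFO open. */
    int fd = open(path, O_WRONLY | O_NONBLOCK);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    for (;;) {                  /* keep writing until the pipe pushes back */
        ssize_t n = write(fd, chunk, sizeof chunk);
        if (n == -1 && errno == EAGAIN) {
            fprintf(stderr, "pipe buffer full, reader not keeping up\n");
            break;
        }
        if (n == -1 && errno == EPIPE) {
            fprintf(stderr, "reader closed its end\n");
            break;
        }
        if (n == -1) {
            perror("write");
            break;
        }
    }

    close(fd);
    return 0;
}

With a plain blocking descriptor the write() simply stalls instead of
returning EAGAIN, which is the behaviour my proxy device works around.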

The fanout and proxy drivers are not part of the kernel and
are available here: http://www.linuxtoys.org/usd/usd.tar.gz
I have an article that describes how to use them. Please
let me know if you would like a copy.


Bob Smith
From: Peter Olcott on

"Bob Smith" <bsmith(a)linuxtoys.org> wrote in message
news:175497-npj.ln1(a)mail.linuxtoys.org...
> Peter Olcott wrote:
>> Are there any reliability issues or other caveats with
>> using named pipes?
>
> As one reply says, "exactly as described", but there are a few
> things that are of note.
>
> First, a named pipe protects the reader of the pipe. If the
> reader does not keep up, it is the writer to the pipe that
> gets the EPIPE error. I use a fanout device to get around
> this problem.
>
> Second, the writer can write about 4K bytes before getting
> blocked if the reader is not reading. This is usually not
> a problem but was for some of my robotic control software.
> I now use a proxy device for this.

That sounds dumb. Why not simply grow the queue length to
some arbitrary pre-specified length?

>
> The fanout and proxy drivers are not part of the kernel and
> are available here: http://www.linuxtoys.org/usd/usd.tar.gz
> I have an article that describes how to use them. Please
> let me know if you would like a copy.
>
>
> Bob Smith


From: Moi on
On Sat, 10 Apr 2010 09:57:45 -0500, Peter Olcott wrote:

> "Bob Smith" <bsmith(a)linuxtoys.org> wrote in message
> news:175497-npj.ln1(a)mail.linuxtoys.org...
>> Peter Olcott wrote:
>>> Are there any reliability issues or other caveats with using named
>>> pipes?
>>
>> As one reply says, "exactly as described", but there are a few
>> things that are of note.
>>
>> First, a named pipe protects the reader of the pipe. If the
>> reader does not keep up, it is the writer to the pipe that gets the
>> EPIPE error. I use a fanout device to get around this problem.
>>
>> Second, the writer can write about 4K bytes before getting blocked if
>> the reader is not reading. This is usually not a problem but was for
>> some of my robotic control software. I now use a proxy device for this.
>
> That sounds dumb. Why not simply grow the queue length to some arbitrary
> pre-specified length?

Because that would consume precious kernel buffer space.
In the normal case one *wants* the writer to block. Take for example the
classic chained pipeline of filters used by lp / lpr; the final part of the
chain is the (physical) printer, which is also the slowest part.
Having larger buffers would only result in the final pipe buffer becoming very big.
Better is to shift the congestion upstream: don't produce more output until
the printer can handle it. The processes in the pipeline can also do some
buffering themselves, in userspace, which is cheaper.

BTW: you can always produce a long pipe by creating two pipes with a
buffering process in between. The buffer is _again_ in userspace.
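
A rough, untested sketch of such a buffering process (the name pipebuf
is made up; run it as: producer | pipebuf | consumer):

/* Drain stdin as fast as the upstream writer produces and feed stdout
 * as fast as the downstream reader will take it, keeping the backlog
 * in a growable userspace buffer instead of the fixed-size kernel
 * pipe buffer. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    char *buf = NULL;           /* userspace backlog      */
    size_t cap = 0, len = 0;    /* allocated / used bytes */
    int in_open = 1;

    fcntl(STDOUT_FILENO, F_SETFL,
          fcntl(STDOUT_FILENO, F_GETFL) | O_NONBLOCK);

    while (in_open || len > 0) {
        fd_set rfds, wfds;
        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        if (in_open)
            FD_SET(STDIN_FILENO, &rfds);
        if (len > 0)
            FD_SET(STDOUT_FILENO, &wfds);

        if (select(STDOUT_FILENO + 1, &rfds, &wfds, NULL, NULL) == -1) {
            if (errno == EINTR)
                continue;
            perror("select");
            return 1;
        }

        if (in_open && FD_ISSET(STDIN_FILENO, &rfds)) {
            if (len + 4096 > cap) {              /* grow the backlog */
                cap = cap ? cap * 2 : 65536;
                buf = realloc(buf, cap);
                if (!buf) { perror("realloc"); return 1; }
            }
            ssize_t n = read(STDIN_FILENO, buf + len, cap - len);
            if (n > 0)
                len += n;
            else if (n == 0)
                in_open = 0;                     /* upstream finished */
            else if (errno != EINTR) {
                perror("read");
                return 1;
            }
        }

        if (len > 0 && FD_ISSET(STDOUT_FILENO, &wfds)) {
            ssize_t n = write(STDOUT_FILENO, buf, len);
            if (n > 0) {
                memmove(buf, buf + n, len - n);  /* drop what was sent */
                len -= n;
            } else if (n == -1 && errno != EAGAIN && errno != EINTR) {
                perror("write");
                return 1;
            }
        }
    }

    free(buf);
    return 0;
}

The upstream writer now only blocks while this process is momentarily
busy, and the real backlog lives in cheap userspace memory.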


HTH,
AvK
From: Peter Olcott on

"Moi" <root(a)invalid.address.org> wrote in message
news:213b3$4bc09722$5350c024$27259(a)cache110.multikabel.net...
> On Sat, 10 Apr 2010 09:57:45 -0500, Peter Olcott wrote:
>
>> "Bob Smith" <bsmith(a)linuxtoys.org> wrote in message
>> news:175497-npj.ln1(a)mail.linuxtoys.org...
>>> Peter Olcott wrote:
>>>> Are there any reliability issues or other caveats with
>>>> using named pipes?
>>>
>>> As one reply says, "exactly as described", but there are a few
>>> things that are of note.
>>>
>>> First, a named pipe protects the reader of the pipe. If the
>>> reader does not keep up, it is the writer to the pipe that
>>> gets the EPIPE error. I use a fanout device to get around
>>> this problem.
>>>
>>> Second, the writer can write about 4K bytes before getting
>>> blocked if the reader is not reading. This is usually not
>>> a problem but was for some of my robotic control software.
>>> I now use a proxy device for this.
>>
>> That sounds dumb. Why not simply grow the queue length to
>> some arbitrary pre-specified length?
>
> Because that would consume precious kernel buffer space.
> In the normal case one *wants* the writer to block. Take for
> example the classic chained pipeline of filters used by lp / lpr;
> the final part of the chain is the (physical) printer, which is
> also the slowest part. Having larger buffers would only result in
> the final pipe buffer becoming very big. Better is to shift the
> congestion upstream: don't produce more output until the printer
> can handle it. The processes in the pipeline can also do some
> buffering themselves, in userspace, which is cheaper.
>
> BTW: you can always produce a long pipe by creating two pipes
> with a buffering process in between. The buffer is _again_ in
> userspace.
>
>
> HTH,
> AvK

My process can handle 100 transactions per second. I was
envisioning a FIFO at least this long. I guess that I have
to change my vision. It does seem to make sense that it is
done this way if the kernel has limited buffer space. Why
does the kernel have limited buffer space?

