From: Moi on
On Sat, 10 Apr 2010 10:30:40 -0500, Peter Olcott wrote:

> "Moi" <root(a)invalid.address.org> wrote in message
> news:213b3$4bc09722$5350c024$27259(a)cache110.multikabel.net...
>> On Sat, 10 Apr 2010 09:57:45 -0500, Peter Olcott wrote:
>>
>>> "Bob Smith" <bsmith(a)linuxtoys.org> wrote in message
>>> news:175497-npj.ln1(a)mail.linuxtoys.org...
>>>> Peter Olcott wrote:
>>>>> Are there any reliability issues or other caveats with using named
>>>>> pipes?
>>>>
>>>> As one reply says, "exactly as described", but there are a few
>>>> things that are of note.
>>>>
>>>> First, a named pipe protects the reader of the pipe. If the
>>>> reader does not keep up, it is the writer to the pipe that gets the
>>>> EPIPE error. I use a fanout device to get around this problem.
>>>>
>>>> Second, the writer can write about 4K bytes before getting blocked if
>>>> the reader is not reading. This is usually not a problem but was for
>>>> some of my robotic control software. I now use a proxy device for
>>>> this.
>>>
>>> That sounds dumb. Why not simply grow the queue length to some
>>> arbitrary pre-specified length?
>>
>> Because that would consume precious kernel buffer space. In the
>> normal case one *wants* the writer to block. Take for example the
>> classic chained pipeline of filters used by lp / lpr; the final part
>> of the chain is the (physical) printer, which is also the slowest
>> part. Having larger buffers would only result in the final pipe
>> buffer becoming very big. Better is to shift the congestion upstream:
>> don't produce more output until the printer can handle it. The
>> processes in the pipeline also have the chance to do some buffering
>> themselves, in userspace, which is cheaper.
>>
>> BTW: you can always produce a long pipe by creating two pipes with a
>> buffering process in between. The buffer is _again_ in userspace.
>>
>>
>> HTH,
>> AvK
>
> My process can handle 100 transactions per second. I was envisioning a
> FIFO at least this long. I guess that I have to change my vision. It
> does seem to make sense that it is done this way if the kernel has
> limited buffer space. Why does the kernel have limited buffer space?

cat /dev/zero | dd > /dev/null

What would happen if the kernel had unlimited buffer space?

Also: you are confusing "throughput" (tps) with chunk size.
The above dd pipe easily handles 2 * 100 MB of data transfer per second.
Every read and every write of these is atomic (modulo the buffer size,
or PIPE_BUF, say at least 512 bytes). That is a lot more than 100 tps.

AvK
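Two figures in the post above, the pipe capacity (about 4K on kernels of that era, 64 KiB on modern Linux) and the atomic-write limit, can be measured directly. A minimal sketch in Python, assuming a Linux-like system; the 4096-byte write size is an arbitrary choice:

```python
import fcntl
import os
import select

# Measure pipe capacity: make the write end non-blocking and
# fill the pipe until the kernel refuses more data.
r, w = os.pipe()
fcntl.fcntl(w, fcntl.F_SETFL, os.O_NONBLOCK)

capacity = 0
try:
    while True:
        capacity += os.write(w, b"x" * 4096)
except BlockingIOError:
    pass  # pipe is full; a blocking writer would sleep here instead

print("pipe capacity:", capacity)              # 4096 historically, 65536 on modern Linux
print("atomic write limit:", select.PIPE_BUF)  # POSIX guarantees at least 512
```

Writes of at most PIPE_BUF bytes are all-or-nothing, which is why several writers can share one named pipe without interleaving their records, provided each record fits.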
From: Peter Olcott on

"Moi" <root(a)invalid.address.org> wrote in message
news:c16b2$4bc09d70$5350c024$2267(a)cache100.multikabel.net...
> On Sat, 10 Apr 2010 10:30:40 -0500, Peter Olcott wrote:
>
>>> BTW: you can always produce a long pipe by creating two
>>> pipes with a buffering process in between. The buffer is
>>> _again_ in userspace.
>>>
>>>
>>> HTH,
>>> AvK
>>
>> My process can handle 100 transactions per second. I was
>> envisioning a FIFO at least this long. I guess that I have
>> to change my vision. It does seem to make sense that it is
>> done this way if the kernel has limited buffer space. Why
>> does the kernel have limited buffer space?
>
> cat /dev/zero | dd > /dev/null
>
> What would happen if the kernel had unlimited
> buffer space?
>
> Also: you are confusing "throughput" (tps) with
> chunk size.

No, I am not. Because my process can handle 100 transactions per second,
a queue length of 100 transactions is reasonable. I will just have to
adjust my design, I guess; possibly use another form of IPC.

> The above dd pipe easily handles 2 * 100 MB of data transfer
> per second. Every read and every write of these is atomic
> (modulo the buffer size, or PIPE_BUF, say at least 512 bytes).
> That is a lot more than 100 tps.
>
> AvK


From: Moi on
On Sat, 10 Apr 2010 10:50:58 -0500, Peter Olcott wrote:

> "Moi" <root(a)invalid.address.org> wrote in message
>
>>
>> cat /dev/zero | dd > /dev/null
>>
>> What would happen if the kernel had unlimited buffer space?
>>
>> Also: you are confusing "throughput" (tps) with chunk size.
>
> No, I am not. Because my process can handle 100 transactions per second,
> a queue length of 100 transactions is reasonable. I will just have to
> adjust my design, I guess; possibly use another form of IPC.


Why would you keep 100 "transactions" (actually they are just messages)
stored "in transit" in a system buffer?

If you want them buffered, then let the reader buffer.

AvK
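The "long pipe" trick quoted earlier and the reader-side buffering suggested here are the same idea: put an unbounded userspace buffer between two kernel pipes. A hypothetical sketch in Python (the name `longpipe` and the 64 KiB chunk size are illustrative, not from the thread):

```python
import os
import queue
import threading

def longpipe():
    """Return (write_fd, read_fd) joined by an unbounded userspace buffer.

    The writer never blocks on kernel pipe capacity: one relay thread
    drains the first pipe into a Python queue, and a second thread feeds
    the queue into the pipe the consumer reads from.
    """
    r1, w1 = os.pipe()      # producer writes here
    r2, w2 = os.pipe()      # consumer reads here
    buf = queue.Queue()     # unbounded buffer, lives in userspace

    def pump_in():
        while chunk := os.read(r1, 65536):
            buf.put(chunk)
        buf.put(None)       # EOF marker once the writer closes w1

    def pump_out():
        while (chunk := buf.get()) is not None:
            view = memoryview(chunk)
            while view:     # loop: a pipe write may be partial
                view = view[os.write(w2, view):]
        os.close(w2)

    threading.Thread(target=pump_in, daemon=True).start()
    threading.Thread(target=pump_out, daemon=True).start()
    return w1, r2
```

The producer can now push far more than one pipe's capacity without blocking on the consumer; the backlog accumulates in the Python queue, i.e. in userspace.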
From: Peter Olcott on

"Moi" <root(a)invalid.address.org> wrote in message
news:a56f3$4bc0a145$5350c024$5576(a)cache100.multikabel.net...
> On Sat, 10 Apr 2010 10:50:58 -0500, Peter Olcott wrote:
>
>> "Moi" <root(a)invalid.address.org> wrote in message
>>
>>>
>>> cat /dev/zero | dd > /dev/null
>>>
>>> What would happen if the kernel had unlimited
>>> buffer space?
>>>
>>> Also: you are confusing "throughput" (tps) with
>>> chunk size.
>>
>> No, I am not. Because my process can handle 100
>> transactions per second, a queue length of 100
>> transactions is reasonable. I will just have to
>> adjust my design, I guess; possibly use another
>> form of IPC.
>
>
> Why would you keep 100 "transactions" (actually they are
> just messages) stored "in transit" in a system buffer?
>

So that I would not have to do the buffering myself.

> If you want them buffered, then let the reader buffer.

I guess that's my only choice.

>
> AvK


From: Moi on
On Sat, 10 Apr 2010 11:08:06 -0500, Peter Olcott wrote:

> "Moi" <root(a)invalid.address.org> wrote in message
> news:a56f3$4bc0a145$5350c024$5576(a)cache100.multikabel.net...

>>
>>
>> Why would you keep 100 "transactions" (actually they are just messages)
>> stored "in transit" in a system buffer?
>>
>>
> So that I would not have to do the buffering myself.
>
>> If you want them buffered, then let the reader buffer.
>
> I guess that's my only choice.

No, the other choice is to let the writer block
(which is not as bad as it seems; it makes little sense to
accept work that you cannot handle yet).

And there still is the choice of a spooldir, which offers you persistence
and atomicity for free. Plus high-capacity ;-)

AvK
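The spooldir suggestion is straightforward to sketch: one file per message, written under a temporary name and then rename()d into place. rename() is atomic within a filesystem, so a reader never sees a half-written message, and the data survives a crash once fsync() returns. A hypothetical single-writer sketch (the function names and timestamp naming scheme are illustrative, not from the thread):

```python
import os
import tempfile
import time

def enqueue(spool, payload):
    """Atomically add one message to the spool directory."""
    os.makedirs(spool, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=spool, prefix=".tmp-")
    try:
        os.write(fd, payload)
        os.fsync(fd)            # persistence: data is on disk before publishing
    finally:
        os.close(fd)
    # Nanosecond timestamp as the name, so names sort oldest-first.
    final = os.path.join(spool, "%d" % time.time_ns())
    os.rename(tmp, final)       # atomic publish: no reader sees a partial file
    return final

def dequeue(spool):
    """Remove and return the oldest message, or None if the spool is empty."""
    ready = sorted(n for n in os.listdir(spool) if not n.startswith(".tmp-"))
    for name in ready:
        path = os.path.join(spool, name)
        with open(path, "rb") as f:
            data = f.read()
        os.unlink(path)
        return data
    return None
```

Capacity is now bounded by the disk rather than a kernel buffer, and the queue survives a reboot, which is the persistence "for free" mentioned above.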