From: Peter Olcott on

"Jasen Betts" <jasen(a)xnet.co.nz> wrote in message
news:hps2v2$nta$1(a)reversiblemaps.ath.cx...
> On 2010-04-10, Peter Olcott <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>> My process can handle 100 transactions per second. I was
>> envisioning a FIFO at least this long.
>
> If your process can handle 0.0001 transactions per
> microsecond.
>
> how big do you need the buffer to be?
>

You can't make the decision on that basis alone. The decision needs to
be based on the unpredictable pattern of jobs that will arrive in the
future. This can range anywhere from getting 100,000 jobs all in the
same minute once a day, and no other jobs at any other time, to a
completely even flow at maximum processing capacity 24/7. Because of
this it would be best to use a FIFO queue of arbitrary length.
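
To make that concrete, here is a minimal sketch (C++; the Job type and
names are only illustrative) of what I mean by a FIFO of arbitrary
length:

// Minimal sketch (illustrative only): a FIFO of pending jobs with no
// fixed capacity. std::queue over std::deque grows as needed, so a
// burst of 100,000 submissions is simply queued and drained later at
// the process's ~100 jobs/sec rate.
#include <queue>
#include <string>

struct Job {
    std::string request;   // e.g. the text of one submitted job
};

std::queue<Job> pending;   // grows with bursts, shrinks as jobs drain

void enqueue(Job j) {
    pending.push(std::move(j));
}

bool dequeue(Job &j) {
    if (pending.empty())
        return false;
    j = std::move(pending.front());
    pending.pop();
    return true;
}

In practice the queue would sit behind whatever IPC mechanism feeds the
process, with the usual locking if more than one thread touches it.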


From: hsantos on
On Apr 8, 7:32 pm, Ian Collins <ian-n...(a)hotmail.com> wrote:
> On 04/ 9/10 11:12 AM, Peter Olcott wrote:
>
> > "Ian Collins"<ian-n...(a)hotmail.com> wrote in message
> >news:82726lFp8jU7(a)mid.individual.net...
> >> On 04/ 9/10 04:58 AM, Peter Olcott wrote:
> >>> Are there any reliability issues or other caveats with
> >>> using
> >>> named pipes?
>
> >> In what context?
>
> > One respondent in another group said that named pipes are
> > inherently very unreliable and I think that he switched to
> > some sort of sockets. In any case he did not choose named
> > pipes for his IPC because of significant reliability issues
> > that he encountered.
>
> Care to cite a reference?

Ian,

As you all know, "Olcott threads" inevitably become prolonged, mangled,
twisted and lost. The recent Windows threads he started were no
different. The problem is that he asks questions and, as you know, will
always argue with the input provided, when in fact he has shown he
knows nothing about what he is asking or gets it all wrong. There is no
experimentation or exploring on his part. For him, the presumption is
that you are wrong with your input unless he has a basic understanding
of what you are talking about. The problem is he often does not.

Named pipes do work reliably under Windows, in my experience, for
certain low-end types of applications. When the loads are higher, which
is what Olcott was aiming for with an overly optimistic throughput
requirement, my point was that since he has shown a lack of experience
programming named pipes, which can be very complex, especially for what
he was asking to do, they can be very unreliable in high-throughput and
network scenarios. For our product line, we considered named pipes for
the RPC protocol. It did not work very well: it didn't scale, and there
were lockups and bottlenecks. Switching to a socket RPC bind resolved
these high-end throughput and networking issues.
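
To give a feel for where the complexity starts, here is a bare-bones
Win32 named-pipe client write (a sketch only; the pipe name and retry
policy are illustrative). Even this trivial case has to deal with busy
pipe instances and reconnects, and that bookkeeping multiplies quickly
under load:

// Sketch of a named-pipe client write with busy-pipe retry handling.
#include <windows.h>

int main() {
    const char* pipeName = "\\\\.\\pipe\\ocr_jobs";   // hypothetical pipe name
    HANDLE h = INVALID_HANDLE_VALUE;

    for (int attempt = 0; attempt < 5; ++attempt) {
        h = CreateFileA(pipeName, GENERIC_WRITE, 0, NULL,
                        OPEN_EXISTING, 0, NULL);
        if (h != INVALID_HANDLE_VALUE)
            break;
        if (GetLastError() != ERROR_PIPE_BUSY)
            return 1;                          // server gone, or other error
        // All pipe instances busy: wait up to 2 seconds for a free one.
        if (!WaitNamedPipeA(pipeName, 2000))
            return 1;
    }
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    const char msg[] = "job request";
    DWORD written = 0;
    if (!WriteFile(h, msg, sizeof(msg), &written, NULL)) {
        // Under load the server side can drop connections; the client
        // must be prepared to reconnect and resend.
        CloseHandle(h);
        return 1;
    }
    CloseHandle(h);
    return 0;
}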

I suggested, for HIM, that he should look at other approaches and cited
various methods to explore, including a simplistic log file, because he
was getting carried away with new requirements of having 100% "crash
recovery" and "fault tolerance." He didn't want that and got lost, so I
told him to just use disk files with no caching (as much as he can turn
off, or flush often). But every idea was thrown to him. I even cited
Microsoft's own recommendation (sorry, don't have the link off hand) to
consider sockets, not named pipes, when high throughput and networking
may be important. I also provided a DR DOBBS 2003 article with a C++
named pipe class discussing how easy it can be when you implement it
right, taking into account error trapping, synchronization, buffering,
etc.
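
As a deliberately simple sketch of the "disk file, flush often" idea
(the file name and flag choices are only illustrative, not a complete
recovery design): append each job to a log with write-through semantics
and flush before acknowledging it:

// Append one record to a durable job log (illustrative sketch).
// FILE_FLAG_WRITE_THROUGH asks the OS to push each write toward the
// device instead of leaving it in the cache; FlushFileBuffers is a
// belt-and-braces flush before the job is acknowledged.
#include <windows.h>

bool append_record(const char* path, const void* rec, DWORD len) {
    HANDLE h = CreateFileA(path, FILE_APPEND_DATA, FILE_SHARE_READ, NULL,
                           OPEN_ALWAYS, FILE_FLAG_WRITE_THROUGH, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return false;

    DWORD written = 0;
    BOOL ok = WriteFile(h, rec, len, &written, NULL) && written == len;
    if (ok)
        ok = FlushFileBuffers(h);   // force it out before acknowledging
    CloseHandle(h);
    return ok != 0;
}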

If you follow Olcott's messages/threads of late, and all the huge
thread growth with nearly all of them ending the same way, the guy
wants complexity yet simplicity, with everything running like perpetual
motion in a resistance-less world. Pure memory, no virtual memory, yet
he wants to load memory beyond the OS's effective process limits with
no sharing. No disk caching, yet he wants 100% crash recovery with no
loss of data. And most important of all, nearly a dozen experts,
scientists and engineers have stated that his design is crippled for
his desired throughput.

Putting Windows/Linux aside, what he wants is:

Multi-Thread Web Server --> 4 EXE processes, each with 1 FIFO queue

He has trouble designing his own EXE for threads. He only got the
multi-threaded web server because he found Mongoose, which he stated he
would embed in each EXE. So right there he is confused, citing a
different model:

4 EXEs, each a Mongoose Web Server

In prolonged threads, we tried to get him to understand that he now has
4 different web servers, each with its own URL or IP or whatever. He
might consider using a web proxy, unless he is going to use 4 different
web forms, each with its own URL. I don't think he yet understands how
the 4-EXE web server model alters his client-side designs.

But regardless of the layout, he was also not getting that his
suggested rate dictates how many handlers are required. I provided a
simple formula, which is a variation of Little's Law:

TPS = N * 1000 / WT

where

TPS is transactions per second
N is number of handlers
WT is the worktime to complete a single transaction in msecs.

He indicated:

TPS = 100 jobs per sec
WT = 100 ms Maximum Work Time per job

Therefore, on this basis, for the worst case, he needs a minimum of 10
handlers, regardless of how the handlers are spread: one machine or
multiple machines, one EXE with multiple threads or multiple EXEs, etc.
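
The same arithmetic, rearranged to solve for N (the numbers are the
ones he gave):

// N = TPS * WT / 1000, rounded up to a whole handler.
#include <cmath>
#include <cstdio>

int main() {
    double tps = 100.0;   // required jobs per second (his figure)
    double wt  = 100.0;   // worst-case work time per job, in msecs

    int handlers = static_cast<int>(std::ceil(tps * wt / 1000.0));
    std::printf("minimum handlers = %d\n", handlers);   // prints 10
    return 0;
}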

But he was trying to do this with 4 EXEs, and he wants each EXE to be
totally separated per job task. He said he wanted no contention between
the four EXEs, no data sharing, total autonomy, yet he kept talking
about needing to do scaling and thread/process priority changes to help
favor the single high-priority EXE by putting the others to sleep.

Which is all fine and good, but based on his confused,
changing-by-the-day, back-and-forth design, in my experience and that
of the others who have been participating in this vapor project of his,
it is flawed and will not work.

Finally, just as one would expect in a Linux technical forum, when
anyone posts in a Windows forum they should expect to get Windows-based
answers, and even then, generality was provided to him. I'm not a Linux
expert, but I'm sure Linux also promotes common-sense engineering and
the same kind of time-tested, sound basic design principles as Windows.
For his case:

- Use Shared Memory
- Use Memory Maps
- Use Thread Worker Pools (see the sketch below)
- Use IOCP for scaling

and, foremost, don't re-invent the wheel; use the power of the
computer, which he is totally underestimating, especially with thread
designs. Even then, I cited how some vendors, such as the Chrome and
IE9 browsers, are moving to a process-centric design to provide better
browser reliability when many web sites are open.
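
For the thread worker pool item, a minimal sketch (C++11; the pool size
and names are only illustrative) of the pattern: one shared job queue
feeding N worker threads, where N comes from the Little's Law figure
above:

// Fixed-size worker pool: N threads pulling jobs off one shared FIFO.
#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class WorkerPool {
public:
    explicit WorkerPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~WorkerPool() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
    void submit(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(m_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty())
                    return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();   // the actual transaction work happens here
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> jobs_;
    std::vector<std::thread> workers_;
    bool done_ = false;
};

Sized with the earlier figure, that would be WorkerPool pool(10), with
each submitted job being one transaction.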

So everything was discussed with him. The benefit of the doubt was
given to him with every idea thrown out to him, even with a flawed
many-threads-to-one-thread FIFO design.

The bottom line is he doesn't wish to explore, test and measure on his
own, regardless of what is suggested. He is looking for proof and
theory, and even when that's provided, he continues to rationalize the
opposite or incorrect conclusion of what's stated, written and long
published, to suit his purpose.

Go Figure

Hector Santos/CTO
http://www.santronics.com