From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:ialur5h9urt8rlg8h6k1i7a3lpsusojieo(a)4ax.com...
> See below...
> On Thu, 8 Apr 2010 19:58:33 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>> All sorts of methods, beginning with a simple straight
>>> shared file.
>>
>>I am beginning with a simple shared file. The only purpose
>>of the other IPC is to inform the process of the event
>>that
>>the file has been updated at file offset X, without the
>>need
>>for the process to poll for updates. File offset X will
>>directly pertain to a specific process queue.
> ****
> Ohh, so you NO LONGER CARE about either power failure or
> operating system crash?

Hector thinks it's a good idea and you don't, so I can't go
by credibility alone: although you have lots of experience
and a PhD, Hector has more experience, and more recent
experience, in this area. Because of this I weigh your
credibility equal to his, thus I need a tie breaker.

The obvious tie breaker (the one that I always count on) is
complete and sound reasoning, and the sound reasoning that
you have provided goes against your point of view. I know
from the SQLite design pattern how to make disk writes 100%
reliable. I also know all about transactions. Knowing these
two things, a simple binary file can be made to protect
against power failures.
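
Here is a minimal sketch of what I mean by that design
pattern, assuming a POSIX system; the record layout, the toy
checksum, and every name here are mine for illustration, not
SQLite's actual format. The point is that nothing gets
acknowledged until the final fsync() returns:

    // Minimal write-ahead journal append sketch (POSIX, C++).
    // Assumption: fsync() actually reaches the platters, i.e.
    // the drive's write cache is off or honors flush requests.
    #include <cstdint>
    #include <string>
    #include <fcntl.h>
    #include <unistd.h>

    // Toy checksum for illustration only; a real journal
    // would use CRC32 or better.
    uint32_t checksum(const void* p, size_t n) {
        uint32_t sum = 0;
        const uint8_t* b = static_cast<const uint8_t*>(p);
        for (size_t i = 0; i < n; ++i) sum = sum * 31 + b[i];
        return sum;
    }

    // Append one record: length, payload, checksum. The record
    // is durable only after fsync() returns successfully.
    bool journal_append(int fd, const std::string& payload) {
        uint32_t len = payload.size();
        uint32_t sum = checksum(payload.data(), len);
        if (write(fd, &len, sizeof len) != (ssize_t)sizeof len)
            return false;
        if (write(fd, payload.data(), len) != (ssize_t)len)
            return false;
        if (write(fd, &sum, sizeof sum) != (ssize_t)sizeof sum)
            return false;
        return fsync(fd) == 0;  // only now acknowledge the client
    }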

I have brought up another issue several times that you have
not yet addressed. Nothing that you have said would protect
against an OS crash that overwrites the wonderfully
transactional, fully flushed database with garbage data.

>
> A "simple shared file" raises so many red flags that I
> cannot begin to say that "this is
> going to be a complete disaster if there is any failure of
> the app or the operating
> system"

With fully flushed buffers and a journal file this cannot
be a problem, because that is all that can be done.

>
> But hey, if it gives the illusion of working under ideal
> conditions, it MUST be robust
> and reliable under all known failure modes, right?

Overwriting the file or database or whatever with garbage
data because of an OS crash continues to go unaddressed.

> Well, if you are comparing apples and chocolate cupcakes,
> they are pretty much the same.
> Any comparison of linux "named pipes" to Windows "named
> pipes" has to take into
> consideration that they are TOTALLY DIFFERENT mechanisms.
> Shall I tell my set of Unix
> security jokes, or just say "Unix security", which is a
> joke all by itself? So I tend to
> not find ANY comparisons valid. They are two completely
> different systems, which look
> alike only if you stand back a few hundred feet and
> squint. (Windows has a file system;
> linux has a file system; MS-DOS had a file system. They
> are identical only insofar as
> they allow a program to name sequences of bytes stored on
> a disk. But I've NEVER lost a
> file in a Windows crash, and it was common to lose a file,
> and EVERY TRACE of the file, on
> a Unix crash, to the point where I always kept a separate
> directory of files in the hopes
> it would survive the crash. I lost far too many hours due
> to the unreliability of the
> Unix "file system" (if one can dignity anything so
> unreliable with that name). But since
> you know that the file system is utterly reliable, good
> luck.
> joe

Mission critical apps at AFWA trust Unix. Windows always
takes ten times as long to learn because they had a team of
a dozen experts assigned to making the design as convoluted
as possible. I need to add Unix to my skill set to remain
employable.

>>It's looking more like four processes with one having much
>>higher priority than the others, each reading from one of
>>four FIFO queues.
>>(1) Paying customer small job (one page of data) This is
>>the 10 ms job
>>(2) Paying customer large job (more than one page of data)
>>(3) Building a new recognizer
>>(4) Free trial customer
> ****
> As I pointed out earlier, mucking around with thread
> priorities is very, very dangerous

You did not even pay attention to what I just said. The
Unix/Linux people said the same thing about thread
priorities; that is why I switched to independent processes
that have zero dependency upon each other.

> and should NOT be used as a method to handle load
> balancing. I would use a different
> approach, such as a single queue kept in sorted order, and
> because the free trial jobs are
> small (rejecting any larger jobs) there should not be a
> problem with priority inversion.

Four processes, not threads. Since they have zero
dependency upon each other there is no chance of priority
inversion.



From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:4lmur51ea3nju0dnl7ms6vcurv9f0q9nlc(a)4ax.com...
> See below...
> On Thu, 08 Apr 2010 22:16:14 -0400, Hector Santos
> <sant9442(a)nospam.gmail.com> wrote:
>
>>> Some of the above have fixed queue lengths don't they?
>>
>>
>>No, because the question doesn't apply and I doubt you
>>understand it,
>>because you have a very primitive understanding of queuing
>>concepts.
>>No matter what is stated, you don't seem to go beyond
>>basic layman's abstract thinking - FIFO. And your idea of
>>how this "simplicity" is applied is flawed because of a
>>lack of basic understanding.
> ***
> Note that I agree absolutely with this! The concept that
> a fixed-sized queue matters at
> all shows a total cluelessness.

Bullshit. At least one of the queuing models discards input
when the queue length exceeds some limit.

>>There were plenty of links where people had issues - even
>>for LINUX
> ****
> If you ignore the issue of what happens if either side of
> the pipe fail, or the operating
> system crashes. But hey, reliability is not NEARLY as
> important as having buffer lengths
> that grow (if this is actually true of linux named pipes).
> This is part of the Magic
> Morphing Requirements, where "reliability" got replaced
> with "pipes that don't have fixed
> buffer sizes".
> ****

The other issue is reentrancy. I remember from my MS-DOS
interrupt service routine (ISR) development that some
processes are occasionally in states that cannot be
interrupted. One of these states is file I/O. Now the whole
issue of critical sections and other locking issues has to
be dealt with. A simple FIFO made using a named pipe
bypasses these issues.
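
To be concrete about why: POSIX guarantees that a write() of
at most PIPE_BUF bytes to a FIFO is atomic, so multiple
writer threads need no critical section of their own. A
minimal sketch, with a hypothetical pipe name:

    // Writer side of a named-pipe FIFO (POSIX, C++). A write()
    // of <= PIPE_BUF bytes is atomic, so concurrent web-server
    // threads cannot interleave their notification messages.
    #include <cstdint>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main() {
        const char* path = "/tmp/ocr_high_priority"; // hypothetical
        mkfifo(path, 0600);            // no-op if it already exists
        int fd = open(path, O_WRONLY); // blocks until a reader opens

        uint64_t offset = 4096;        // offset of the new job record
        write(fd, &offset, sizeof offset); // atomic: under PIPE_BUF
        close(fd);
        return 0;
    }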

>>
>>For what you want to use it for, my engineering sense
>>based on
>>experience tells me you will have problems, especially YOU
>>for this
>>flawed design of yours. Now you have 4 Named Pipes that
>>you have to
>>manage. Is that under 4 threads? But you are not
>>designing for
>>threads.

That's right; I discarded threads in favor of processes a
long time ago.

>>the message yes, another no. Is the 1 OCR process going to
>>handle all four pipes? Or 4 OCR processes? Does each
>>OCR have their
>>own Web Server? Did you work out how the listening
>>servers will bind
>>the IPs? Are you using virtual domains? sub-domains?
>>Multi-home IP
>>machine?

One web server (by its own design) has multiple threads
that communicate with four OCR processes (not threads) using
some form of IPC, currently Unix/Linux named pipes.

> ****
> The implementation proposals have so many holes in them
> that they would be a putter's
> dream, or possibly a product of Switzerland. This design
> guarantees maximum conflict for

And yet you continue to fail to point out the nature of
these holes using sound reasoning. I directly address what
reasoning you do provide, and you simply ignore what I say.

> resources and maximum unused resources, but what does
> maximum resource utilization and
> minimum response time have to do with the design? It
> guarantees priority inversion,

Yeah, it sure does, when you critique your misconception of
my design instead of the design itself. I use the term
PROCESSES and you read the term THREADS. Please read what I
actually say, not what you expect that I will say or have
said.


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:mmnur5122gmolfpaf17h0d7088tlkuv1lg(a)4ax.com...
> See below...
> On Thu, 8 Apr 2010 21:51:37 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>(1) One web server that inherently has by its own design
>>one
>>thread per HTTP request
>>(2) Four named pipes corresponding to four OCR processes,
>>one of these has much higher process priority than the
>>rest.
> ***
> In other words, a design which GUARANTEES maximum response
> time and fails utterly to
> provide ANY form of concurrency on important requests!
> WOW! Let's see if it is possible
> to create an even WORSE design (because it is so much
> easier to create a better design
> that is no fun)
> ****
>>(3) The web server threads place items in each of the FIFO
>>queues.
> ***
> Silly. A priority-ordered queue with
> anti-priority-inversion policies makes a LOT more
> sense!
> ****
>>(4) The OCR processes work on one job at a time from each
>>of
>>the four queues.
> ****
> Let's see, can we make this worse? I don't see how, given
> how bad a design this is, but
> why take the challenge away? Or, make a better design:
> multiple servers, a single
> priority-ordered queue. No, that is too simple and too
> obvious! All it does is minimize
> response time and maximize concurrency, and what possible
> value could that have in a
> design?

Oh, right, now I know why that won't work: the 3.5 minute
jobs (whenever they occur) would completely kill my 100 ms
goal for the high priority jobs. I could keep killing these
jobs whenever a high priority job arrives, but I might not
ever have a whole 3.5 minutes completely free, so these 3.5
minute jobs might never get executed. Giving these 3.5
minute jobs a tiny time slice is the simplest way to solve
this problem.

A possibly better way to handle this would be to have the
3.5 minute job completely yield to the high priority jobs,
and then pick up exactly where it left off.
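
A minimal sketch of that yield-and-resume idea, under my own
assumptions: the job divides cleanly into pages, progress
lives in a checkpoint file, and high_priority_job_waiting()
is a hypothetical poll of the high priority queue:

    // Cooperative yield sketch (C++): the long job checkpoints
    // after every page so a pause resumes where it left off.
    #include <fstream>
    #include <string>
    #include <unistd.h>

    bool high_priority_job_waiting(); // hypothetical: poll the pipe
    void ocr_one_page(int page);      // hypothetical: real OCR work

    void run_long_job(int total_pages, const std::string& ckpt) {
        int page = 0;
        std::ifstream in(ckpt);
        in >> page;                   // resume from last checkpoint
        for (; page < total_pages; ++page) {
            ocr_one_page(page);
            std::ofstream(ckpt) << page + 1; // record progress*
            while (high_priority_job_waiting())
                sleep(1);             // yield until the queue drains
        }
    }
    // *strictly, the checkpoint write would also need fsync()
    //  to survive a power failure, per the journaling discussion.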

>>Just the pipe name itself is part of the disk; nothing
>>else hits the disk. There are many messages about this on
>>the Unix/Linux groups; I started a whole thread on this:
> ****
> And the pipe grows until what point? It runs out of
> memory? Ohh, this is a new
> interpretation of "unlimited pipe growth" of which I have
> been previously unaware!

A named pipe does not ever simply discard input at some
arbitrary queue length, such as five items. One of the FIFO
models did just that.


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:a3our5t87p15br54emp6mnuo2eg3pudcb8(a)4ax.com...
> On Thu, 8 Apr 2010 21:10:25 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>Yes that is it. I don't even acknowledge receipt of the
>>request until it is committed to the transaction log.
>>Anything at all that prevents this write also prevents the
>>acknowledgement of receipt. So basically I never say "I
>>heard you" until the point where nothing can prevent
>>completing the transaction.
> ****
> OK, this is a good specification. I'm not sure how the
> current proposal, which doesn't
> have anything resembling a reliable log, accomplishes it.
> ****

OK, so I have to specify every minuscule detail, step by
step, item by item, to let you know that I know how to make
a reliable log file? I will simply use what I have referred
to as the SQLite design pattern.

It seems that the most difficult aspect of this is making
sure that every kind of buffer is completely flushed to the
disk platters. The hard parts are actually turning off the
drive's write cache (there may not be any software, i.e.
hard disk driver, that actually does this), making sure
that fsync() is not broken, as it often is, and making sure
that fsync() is applied everywhere it needs to be applied,
which is at least the file and the directory.
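
To be concrete about "the file and the directory": on Linux,
fsync() on the file makes the data durable, and fsync() on
the containing directory makes the directory entry itself
durable. A minimal sketch, assuming POSIX:

    // Durably create a file: flush the data, then flush the
    // directory so the new name itself is on the platters.
    #include <fcntl.h>
    #include <unistd.h>

    bool durable_create(const char* dir, const char* path,
                        const void* data, size_t n) {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) return false;
        bool ok = write(fd, data, n) == (ssize_t)n
                  && fsync(fd) == 0;
        close(fd);

        int dfd = open(dir, O_RDONLY | O_DIRECTORY);
        if (dfd < 0) return false;
        ok = ok && fsync(dfd) == 0; // commit the directory entry
        close(dfd);
        return ok;
    }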

If I can make sure of these things and follow the SQLite
journaling design pattern, then I can make reliable
transactions. If fsync() is broken and/or the drive's write
cache can't be turned off, then all of the great safe
transaction advice that you have provided becomes moot. If
you can't flush the buffers, then safe transactions can't
be made.


>>Then in the event that I do not receive the HTTP
>>acknowledgement of final receipt of the output data, I
>>roll
>>the whole transaction back. If the reason for the failure
>>is
>>anything at all on my end I roll the charges back, but,
>>let
>>the customer keep the output data for free. If the
>>connection was lost, then this data is waiting for them
>>the
>>next time they log in.
> ****
> And you guarantee this exactly HOW? Oh yes, with the
> transacted database (I thought this

No, there is more to it than that. I have to have explicit
acknowledgement from the client.

> had been eliminated from the design). And you have
> designed the recovery code? You have
> the state machine diagram of the entire workflow and know
> what happens at each of the
> cut-points where failure can occur (which is essentially
> at any arc of the DFA)? I don't
> recall any acknowledgement that this was part of your
> implementation design.
> ****
>>
>>One of the Linux/Unix people is recommending MySQL InnoDB
>>storage engine because it has very good crash recovery.
> ****
> Crash recovery of the database is NOT the same as having a
> recovery policy for your
> workflow; all it guarantees is a certain amount of trust
> of what is in the database. What
> you DO with that information is what is critical! When you
> discover the database
> accurately reflects a state of handling a request, you
> have to have an idea of what you
> are going to do for EVERY such state that is accurately
> reflected in the database!
> *****
>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:leour5hk0g6agvg2bq0ubgcve3riso0p9q(a)4ax.com...
> See below...
> On Thu, 8 Apr 2010 20:40:34 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>Solution is new vendor where disk caching can be turned
>>off.
>>Vendor says that disk caching can be turned off. Vendor
>>rep
>>may be guessing.
> ****
> Note also that the existence of an ATAPI command to invoke
> an action does not guarantee
> the existence of an API that will send that ATAPI command.
> So you need a guarantee that
> the OS and/or the DBMS can actually activate this feature!
>
> We discovered that even though our system wanted to take
> advantage of certain features of
> a particular vendor's disk drive, we could not invoke them
> (for example, the SCSI
> pass-through was broken in the SCSI device driver!). So
> EVERY component of the system,
> from the application through the OS through the low-level
> disk drivers through the
> hardware on the disk drive must support the ability to
> invoke some state in the hardware.

Yes. I think that the drives might be SCSI; I have not
been able to verify even that yet.

>>But required OS reboots are, right? Still need all writes
>>to
>>go straight to the platters.
>>
> ****
> Note that when the OS reboots, the reboot procedure has,
> as one of its effects, the
> completion of pending writes to the disk. When you say
> "required reboot", or course, you
> are referring to the kinds of reboots that happen after
> updates to software, or any other

Nope, that part is easy to deal with. The one that I am
talking about is a system hang like the MS Windows blue
screen of death.


>>Good reason for hot swappable RAID, then.
> ****
> I presume you mean RAID-5. And that you will maintain a
> set of spare hard drives in their
> carriers for this contingency (I do)

Nope. RAID-5 costs $550; RAID-1 costs $75.

>>Make sure the flush to disk then.
> ****
> The OS just crashed. Or your unlikely power-failure
> scenario just happened. So the files
> are flushed to disk exactly HOW?

Simple: every single write is always completely flushed to
disk immediately, and the SQLite design pattern (journaling)
involves keeping an explicit audit trail of every single
disk write. Now a crash only loses the current pending
transaction, and the tiny bit of garbage data can be
deleted.
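
The "deleted" part is just a startup scan: walk the journal,
keep every record whose checksum verifies, and truncate the
torn tail left by the crash. A minimal sketch, using the
same length/payload/checksum record layout as the append
sketch earlier in this thread:

    // Crash-recovery sketch (POSIX, C++): truncate the journal
    // after the last record that verifies.
    #include <cstdint>
    #include <vector>
    #include <fcntl.h>
    #include <unistd.h>

    uint32_t checksum(const void* p, size_t n); // same toy checksum

    void journal_recover(const char* path) {
        int fd = open(path, O_RDWR);
        if (fd < 0) return;
        off_t good_end = 0;
        uint32_t len, sum;
        while (read(fd, &len, sizeof len) == (ssize_t)sizeof len) {
            std::vector<uint8_t> buf(len);
            if (read(fd, buf.data(), len) != (ssize_t)len) break;
            if (read(fd, &sum, sizeof sum) != (ssize_t)sizeof sum)
                break;
            if (sum != checksum(buf.data(), len)) break; // torn
            good_end = lseek(fd, 0, SEEK_CUR); // last good record
        }
        ftruncate(fd, good_end); // delete the garbage tail
        fsync(fd);
        close(fd);
    }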

>>I already solved this issue with my very early design.
>>That
>>is the purpose of my persistent disk file based FIFO
>>queue.
>>As soon as it gets to this file, then even a server crash
>>will not prevent the job from getting completed correctly.
>>We may lose the way to send it back to the user's screen
>>(it
>>will still be in their account when they log in) but we
>>did
>>not lose the actual transaction even if the server
>>crashes.
> ****
> As I recall, because SQLite could not have record numbers
> and do a seek, you abandoned
> the persistent disk-based FIFO queue in favor of named
> pipes (which have ZERO robustness
> under most failure scenarios, but can add infinite amounts
> of kernel memory so they can
> keep growing, so they have some advantages; run out of
> memory, just create a pipe and
> write to it until you get more memory added by magic)

I have stated my design so many times and you still don't
even know what I said?
A single transaction log file is the FIFO queue, and the
named pipes merely notify the processes of the offset within
this file of the relevant change.
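
Spelled out one more time as a sketch (the file names are
hypothetical): the pipe carries nothing but a 64-bit offset,
and the OCR process reads the job record straight out of the
shared transaction log at that offset:

    // Reader side: block on this process's named pipe for an
    // offset, then pread() the record from the transaction log.
    #include <cstdint>
    #include <fcntl.h>
    #include <unistd.h>

    int main() {
        int pipe_fd = open("/tmp/ocr_high_priority", O_RDONLY);
        int log_fd  = open("/var/ocr/transaction.log", O_RDONLY);

        uint64_t offset;
        while (read(pipe_fd, &offset, sizeof offset)
               == (ssize_t)sizeof offset) {
            uint32_t len;
            pread(log_fd, &len, sizeof len, offset); // record header
            // ... pread() the payload at offset + sizeof len, run
            //     the OCR job, then append the result to the log ...
        }
        return 0;
    }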

> In fact, the last I saw, you had FOUR of these queues, all
> badly designed, all working
> with processes that were using the worst possible approach
> to handling prioritization,
> minimizing concurrency, maximizing response time. Not
> clear that this is forward
> progress.

You have yet to explain this critique in terms of
reasoning. Much of the reasoning that you do provide is a
critique of your misconception of my design rather than of
the design itself. I explained, using reasoning, why your
idea of a single priority queue would not work. You don't
explain yourself nearly enough.

If you explained yourself better I would be able to more
easily correct your misconceptions, and you would be better
able to correct mine. I might not have the right conception
of a priority queue, but the intuitive conception of a
priority queue will not work.

>>Alternatively if we lose any part of the process before we
>>get an HTTP acknowledgement that they received their
>>results, we roll the whole transaction back.
> ****
> Actually, you either lose all of a process, or none of it.
> What you are trying to say, I
> think, is that if you suffer a failure at any point in the
> workflow state machine, there
> is some totally magical means that gets you back to the
> magically constructed recovery
> software that restarts the workflow at some point.

Not magical at all, but I am getting very tired of
constantly repeating myself while you continue to read what
you thought I said instead of what I actually said. I have
explained exactly how this would work a dozen times, and you
have never once explained why it would not work with
complete, consistent, and sound reasoning.

I know that you are a very bright and knowledgeable man. I
know that you know much more about these things than I do.
Please let's make this more of a dialogue and much less of
a debate.

>
> I love this approach. Sadly, it would not work for me,
> because my customers actually want
> results. But I defrauded them by billing for many hours
> spent solving these problems,
> when I could have just waved my magic wand and gotten a
> solution that worked!
> joe
>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm