From: Peter Olcott on

"Liviu" <lab2k1(a)gmail.c0m> wrote in message
news:eUt13uGzKHA.5332(a)TK2MSFTNGP02.phx.gbl...
> "Pete Delgado" <Peter.Delgado(a)NoSpam.com> wrote...
>>
>> I believe that your interpretation of "fault tolerance"
>> is that a catastrophic event could happen to your system
>> and your application would not lose *any* data. Is this
>> the definition that you are using?
>

Yes, this is exactly what I am going for, especially for customer
transaction data. This is probably the only fault tolerance
that I will be implementing.

> Absent any catastrophic events, a system might still be called
> "fault tolerant" if it managed at least one successful run under
> controlled conditions on the developer's machine, despite all
> faults with its design and implementation ;-)
>
> Liviu
>
>


From: Hector Santos on
Peter Olcott wrote:


>> Your method does not scale; our suggestions give you
>> scalability, maximize throughput, and probably make it
>> possible to meet your wonderful 500ms goal consistently.
>> joe
>
> Yet you have consistently, time after time, again and again,
> never backed this up with reasoning. As soon as you back this
> up with verifiably correct reasoning, then (and only then)
> will I agree. I absolutely and positively reject dogma as a
> source of truth.


You need to stop. We told you the reasons where MMF applies, and we
told you that YOU can't avoid it anyway, because the SYSTEM uses
memory virtualization regardless. It is BY CHANCE that you have yet
to see any real operational thresholds and boundary conditions,
because you haven't tested for them.

I am glad I was able to prove to you how threads sharing process
memory are better than your multiple single-process redundant
loading. (You are welcome.)
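
One more time, a minimal sketch of the difference (illustrative
names, scaled-down size): every thread reads the SAME copy of the
data, where your multiple processes would each load and hold their
OWN copy.

// Minimal sketch: four worker threads share ONE in-memory copy of
// the large read-only data. Separate processes would each pay the
// load time and the RAM for a redundant copy of the same data.
#include <functional>
#include <thread>
#include <vector>

void OcrWorker(const std::vector<char>& shared)  // a reference, not a copy
{
    // ... read-only lookups against the shared data go here ...
}

int main()
{
    std::vector<char> data(1 << 20);  // stand-in for the ~4 GB table
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back(OcrWorker, std::cref(data));
    for (std::thread& t : pool)
        t.join();
}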

But the MMF suggestion was also mostly based on your first wanting
to use multiple single processes. I suggested a SHARED MMF DLL file,
which serves both design models and also gives you a fast startup
time, versus the X minutes of startup you have now.
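
Roughly like this Win32 sketch (hypothetical names, error handling
omitted): startup becomes the cost of creating the mapping rather
than reading the whole file, and the mapped pages are shared by
every process that maps the same file.

// Rough Win32 sketch (error handling omitted): map a prebuilt data
// file read-only. Startup is the cost of the mapping, not of reading
// X GB; pages come in on demand and are SHARED between processes
// that map the same file.
#include <windows.h>

const void* MapDataFile(const wchar_t* path, HANDLE& file, HANDLE& mapping)
{
    file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                       OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    mapping = CreateFileMappingW(file, NULL, PAGE_READONLY, 0, 0, NULL);
    return MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
}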

But now you bring in disaster recovery and wonder how it can be
reliable. Now you can't avoid considering minimizing the caching
and buffered I/O that benefit your speed. Now MMF plays a bigger
role in giving you a "mix" where tuning factors come into play.

The thing is, MMF works really well if it's READ ONLY huge data. If
you plan to write data, then it helps because of caching and delayed
commits in solid computing environments where power is reliable, but
it raises the opportunity for loss when disaster strikes.
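
If you DO write through a mapping, the best you can do is flush
the dirty pages yourself at each commit point; a sketch:

// Sketch: force dirty mapped pages to disk at a commit point.
// Without this the OS writes the pages back lazily, and a power
// failure loses whatever was still pending.
#include <windows.h>

bool CommitMappedRegion(void* base, SIZE_T bytes, HANDLE file)
{
    return FlushViewOfFile(base, bytes)
        && FlushFileBuffers(file);  // flush the OS/disk write cache too
}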

Can't have it both ways, petero.

The only real way I can see to avoid this is to use FLASH MEMORY as
RAM, but then you lose speed. Sorry.

--
HLS
From: Liviu on
"Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote...
> "Liviu" <lab2k1(a)gmail.c0m> wrote...
>> "Pete Delgado" <Peter.Delgado(a)NoSpam.com> wrote...
>>>
>>> I believe that your interpretation of "fault tolerance" is that
>>> a catastrophic event could happen to your system and your
>>> application would not lose *any* data. Is this the definition
>>> that you are using?
>
> Yes, this is exactly what I am going for, especially for customer
> transaction data. This is probably the only fault tolerance that I
> will be implementing.

You were replying to Pete Delgado, but threaded it under my post.
Oh well, that must be one of those "tedious little details" unworthy
of your attention. Anyway, since we are here...

You must realize, of course, that fault tolerance comes with a price,
and transactional databases weigh heavily in terms of disk caching
and virtual memory usage. Your naive idea that "my app is the only
thing running and will command all RAM to itself" was wrong to begin
with, on too many levels to count, and it is even more wrong once
you add a database to the picture.

Liviu


From: Peter Olcott on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:uhRD9$HzKHA.1064(a)TK2MSFTNGP04.phx.gbl...
> Peter Olcott wrote:
>
>
>>> Your method does not scale; our suggestions give you
>>> scalability, maximize throughput, and probably make it
>>> possible to meet your wonderful 500ms goal consistently.
>>> joe
>>
>> Yet you have consistently, time after time, again and
>> again, never backed this up with reasoning. As soon as you
>> back this up with verifiably correct reasoning, then (and
>> only then) will I agree. I absolutely and positively
>> reject dogma as a source of truth.
>
>
> You need to stop. We told you the reasons where MMF
> applies, and we told you that YOU can't avoid it anyway,
> because the SYSTEM uses memory virtualization regardless.
> It is BY CHANCE that you have yet to see any real
> operational thresholds and boundary conditions, because
> you haven't tested for them.

The tests passed on Linux too: no page faults after an hour.
I will run it again all night.
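
For what it's worth, the fault counters can be read on Linux with
getrusage(); a minimal sketch, illustrative only:

// ru_majflt counts hard faults that actually hit the disk;
// ru_minflt counts soft faults serviced from RAM.
#include <sys/resource.h>
#include <cstdio>

void ReportPageFaults()
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0)
        printf("major faults: %ld, minor faults: %ld\n",
               ru.ru_majflt, ru.ru_minflt);
}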

>
> I am glad I was able to prove to you how threads sharing
> process memory are better than your multiple single-process
> redundant loading. (You are welcome.)

How many times do I have to tell you that I will not be
using multiple single-process redundant loading before you
get it?

>
> But the MMF suggestion was also mostly based on your first
> wanting to use multiple single processes. I suggested a
> SHARED MMF DLL file, which serves both design models and
> also gives you a fast startup time, versus the X minutes
> of startup you have now.

X minutes of startup, once. Not once a day or once a week,
just once.

>
> But now you bring in disaster recovery and wonder how it
> can be reliable. Now you can't avoid considering minimizing
> the caching and buffered I/O that benefit your speed. Now
> MMF plays a bigger role in giving you a "mix" where tuning
> factors come into play.

Pete pointed out that I am really only concerned with not
losing any data. That is correct. Also, the only data that I
am concerned with losing is customer transaction data.
Everything else can be reconstructed easily enough. I don't
have to handle this myself; the database provider can handle
it for me.
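
For example, if the database were something like SQLite, the
transaction boundary is the only part I would write myself. A
rough sketch (hypothetical table name, assuming the default
durable journaling):

// Rough sketch, assuming SQLite: wrap the customer record in a
// transaction and let the database's journal guarantee that a
// committed record survives a crash.
#include <sqlite3.h>

bool RecordTransaction(sqlite3* db, const char* record)
{
    char* sql = sqlite3_mprintf(
        "BEGIN; INSERT INTO txn(data) VALUES(%Q); COMMIT;", record);
    int rc = sqlite3_exec(db, sql, nullptr, nullptr, nullptr);
    sqlite3_free(sql);
    return rc == SQLITE_OK;  // committed means durable, per the database
}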

>
> The thing is, MMF works really well if it's READ ONLY huge
> data. If you plan to write data, then it helps because of
> caching and delayed commits in solid computing environments
> where power is reliable, but it raises the opportunity for
> loss when disaster strikes.

I thought that you said in another posting that MMF enables
disaster recovery?

>
> Can't have it both ways, petero.
>
> The only real way I can see to avoid this is to use FLASH
> MEMORY as RAM, but then you lose speed. Sorry.
>
> --
> HLS


From: Peter Olcott on

"Liviu" <lab2k1(a)gmail.c0m> wrote in message
news:e%23uwbJIzKHA.5936(a)TK2MSFTNGP04.phx.gbl...
> "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote...
>> "Liviu" <lab2k1(a)gmail.c0m> wrote...
>>> "Pete Delgado" <Peter.Delgado(a)NoSpam.com> wrote...
>>>>
>>>> I believe that your interpretation of "fault tolerance"
>>>> is that a catastrophic event could happen to your system
>>>> and your application would not lose *any* data. Is this
>>>> the definition that you are using?
>>
>> Yes, this is exactly what I am going for, especially for
>> customer transaction data. This is probably the only fault
>> tolerance that I will be implementing.
>
> You were replying to Pete Delgado, but threaded it under
> my post.

His post got lost; it never showed up in my Outlook Express.

> Oh well, that must be one of those "tedious little details"
> unworthy of your attention. Anyway, since we are here...
>
> You must realize, of course, that fault tolerance comes
> with a price, and transactional databases weigh heavily in
> terms of disk caching and virtual memory usage. Your naive
> idea that "my app is the only thing running and will
> command all RAM to itself" was wrong to begin with, on too
> many levels to count, and it is even more wrong once you
> add a database to the picture.
>
> Liviu
>
>
>
> Liviu
>
>

For all practical purposes this will still be true. The
database load will be negligible compared to the OCR
processing load: something like 100 KB of data compared to
4 GB of data.