From: Peter Olcott on

"Pete Delgado" <Peter.Delgado(a)NoSpam.com> wrote in message
news:efX%238fqyKHA.5360(a)TK2MSFTNGP06.phx.gbl...
>
> "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote in message
> news:AeidnYxrl7T0vzXWnZ2dnUVZ_judnZ2d(a)giganews.com...
>>
>> I don't want to hear about memory mapped files because I
>> don't want to hear about optimizing virtual memory usage
>> because I don't want to hear about virtual memory until
>> it is proven beyond all possible doubt that my process
>> does not (and can not be made to be) resident in actual
>> RAM all the time.
>
> From my understanding of your "test" (simply viewing the
> number of page faults reported by task manager) you can
> only conclude that there have not been any significant
> page faults since your application loaded the data, not
> that your application and data have remained in main
> memory. If you actually attempt to access all of your code
> and data and there are no page faults, I would be very
> surprised. In fact, knowing what I do about the cache
> management in Windows 7, I'm very surprised that you are
> not seeing any page faults at all unless you have disabled
> the caching service.
>
>>
>> Since a test showed that my process did remain in actual
>> RAM for at least twelve hours,
>
> No. That is not what your simple test showed unless your
> actual test differed significantly from what you expressed
> here.
>
> -Pete
>
(1) I loaded my process
(2) I loaded my process data
(3) I waited twelve hours
(4) I executed my process using its loaded data, and there
were no page faults reported by the process monitor (the same
counter can also be read from code; see the sketch below)
(5) Therefore my process data remained entirely resident in
actual RAM for at least twelve hours.
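
The same per-process counter can also be sampled from code with
GetProcessMemoryInfo() instead of the Task Manager column. A rough,
untested sketch (link with psapi.lib); note this count includes soft
as well as hard page faults:

#include <windows.h>
#include <psapi.h>    // GetProcessMemoryInfo, PROCESS_MEMORY_COUNTERS
#include <stdio.h>

int main(void)
{
   PROCESS_MEMORY_COUNTERS pmc = {0};
   pmc.cb = sizeof(pmc);
   if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
   {
      // Cumulative for the lifetime of the process: sample it before
      // and after step (4) and compare the two readings.
      printf("Page faults so far: %lu\n", pmc.PageFaultCount);
      printf("Working set:        %lu KB\n",
             (unsigned long)(pmc.WorkingSetSize / 1024));
   }
   return 0;
}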


From: Pete Delgado on

"Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote in message
news:kMednXfoL9CQaTXWnZ2dnUVZ_tmdnZ2d(a)giganews.com...
>
> "Pete Delgado" <Peter.Delgado(a)NoSpam.com> wrote in message
> news:efX%238fqyKHA.5360(a)TK2MSFTNGP06.phx.gbl...
>>
>> "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote in message
>> news:AeidnYxrl7T0vzXWnZ2dnUVZ_judnZ2d(a)giganews.com...
>>>
>>> I don't want to hear about memory mapped files because I don't want to
>>> hear about optimizing virtual memory usage because I don't want to hear
>>> about virtual memory until it is proven beyond all possible doubt that
>>> my process does not (and can not be made to be) resident in actual RAM
>>> all the time.
>>
>> From my understanding of your "test" (simply viewing the number of page
>> faults reported by task manager) you can only conclude that there have
>> not been any significant page faults since your application loaded the
>> data, not that your application and data have remained in main memory. If
>> you actually attempt to access all of your code and data and there are no
>> page faults, I would be very surprised. In fact, knowing what I do about
>> the cache management in Windows 7, I'm very surprised that you are not
>> seeing any page faults at all unless you have disabled the caching
>> service.
>>
>>>
>>> Since a test showed that my process did remain in actual RAM for at
>>> least twelve hours,
>>
>> No. That is not what your simple test showed unless your actual test
>> differed significantly from what you expressed here.
>>
>> -Pete
>>
> (1) I loaded my process
> (2) I loaded my process data
> (3) I waited twelve hours
> (4) I executed my process using its loaded data, and there were no page
> faults reported by the process monitor
> (5) Therefore my process data remained entirely resident in actual RAM for
> at least twelve hours.

What program is "process monitor"? Are you referring to the Sysinternals
tool or are you referring to Task Manager or Resource Monitor?

-Pete


From: Hector Santos on
Peter Olcott wrote:

> I still think that the FIFO queue is a good idea. Now I will
> have multiple requests and on multi-core machines multiple
> servers.


IMO, it's just that it's an odd approach to load balancing. You are
integrating software components, like a web server with a
multi-thread-ready listening server, and you are hampering it with
single-threaded-only FIFO queuing. It introduces other design
considerations. Namely, you will need to consider a store-and-forward
concept for your requests and delayed responses. But if your request
processing is very fast, maybe you don't need to worry about it.

In practice the "FIFO" would be at the socket or listening level,
with load balancing handled by restricting and balancing your
connections with worker pools, or by simply letting connections wait,
knowing that processing won't take too long. Some servers have
guidelines for waiting limits. For the Web, I don't recall coming
across any specific guideline other than a practical one per
implementation. The point is you don't want the customers waiting too
long - but what is "too long"?

> What is your best suggestion for how I can implement the
> FIFO queue?
> (1) I want it to be very fast
> (2) I want it to be portable across Unix / Linux / Windows,
> and maybe even Mac OS X
> (3) I want it to be as robust and fault tolerant as
> possible.


Any good collection class will do as long as you wrap it with
synchronization. Example:


typedef struct _tagTSlaveData {
   ... data per request .........
} TSlaveData;

class CBucket : public std::list<TSlaveData>
{
public:
   CBucket()  { InitializeCriticalSection(&cs); }
   ~CBucket() { DeleteCriticalSection(&cs); }

   // Append one request to the tail of the queue.
   void Add( const TSlaveData &o )
   {
      EnterCriticalSection(&cs);
      insert(end(), o);
      LeaveCriticalSection(&cs);
   }

   // Non-blocking: copies and removes the head item,
   // or returns FALSE if the queue is empty.
   BOOL Fetch(TSlaveData &o)
   {
      EnterCriticalSection(&cs);
      BOOL res = !empty();
      if (res) {
         o = front();
         pop_front();
      }
      LeaveCriticalSection(&cs);
      return res;
   }

private:
   CRITICAL_SECTION cs;
} Bucket;
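
If the portability requirement in (2) rules out the Win32 calls, the
same idea can be sketched with std::mutex and std::condition_variable
(needs a C++0x-class compiler; rough sketch only, the Push/Pop names
are mine):

#include <deque>
#include <mutex>
#include <condition_variable>

template <typename T>
class PortableQueue
{
public:
   void Push(const T &o)
   {
      std::lock_guard<std::mutex> lock(m);
      q.push_back(o);
      cv.notify_one();               // wake one waiting consumer
   }

   // Blocks until an item is available (unlike the non-blocking Fetch).
   T Pop()
   {
      std::unique_lock<std::mutex> lock(m);
      cv.wait(lock, [this] { return !q.empty(); });
      T o = q.front();
      q.pop_front();
      return o;
   }

private:
   std::deque<T> q;
   std::mutex m;
   std::condition_variable cv;
};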




--
HLS
From: Peter Olcott on

"Pete Delgado" <Peter.Delgado(a)NoSpam.com> wrote in message
news:Ouu4JOryKHA.1236(a)TK2MSFTNGP06.phx.gbl...
>
> "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote in message
> news:kMednXfoL9CQaTXWnZ2dnUVZ_tmdnZ2d(a)giganews.com...
>>
>> "Pete Delgado" <Peter.Delgado(a)NoSpam.com> wrote in
>> message news:efX%238fqyKHA.5360(a)TK2MSFTNGP06.phx.gbl...
>>>
>>> "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote in message
>>> news:AeidnYxrl7T0vzXWnZ2dnUVZ_judnZ2d(a)giganews.com...
>>>>
>>>> I don't want to hear about memory mapped files because
>>>> I don't want to hear about optimizing virtual memory
>>>> usage because I don't want to hear about virtual memory
>>>> until it is proven beyond all possible doubt that my
>>>> process does not (and can not be made to be) resident
>>>> in actual RAM all the time.
>>>
>>> From my understanding of your "test" (simply viewing the
>>> number of page faults reported by task manager) you can
>>> only conclude that there have not been any significant
>>> page faults since your application loaded the data, not
>>> that your application and data have remained in main
>>> memory. If you actually attempt to access all of your
>>> code and data and there are no page faults, I would be
>>> very surprised. In fact, knowing what I do about the
>>> cache management in Windows 7, I'm very surprised that
>>> you are not seeing any page faults at all unless you
>>> have disabled the caching service.
>>>
>>>>
>>>> Since a test showed that my process did remain in
>>>> actual RAM for at least twelve hours,
>>>
>>> No. That is not what your simple test showed unless your
>>> actual test differed significantly from what you
>>> expressed here.
>>>
>>> -Pete
>>>
>> (1) I loaded my process
>> (2) I loaded my process data
>> (3) I waited twelve hours
>> (4) I executed my process using its loaded data, and
>> there were no page faults reported by the process monitor
>> (5) Therefore my process data remained entirely resident
>> in actual RAM for at least twelve hours.
>
> What program is "process monitor"? Are you referring to
> the Sysinternals tool or are you referring to Task Manager
> or Resource Monitor?
>
> -Pete
>
>

Task Manager
Processes tab
View
Select Columns
Page Faults


From: Hector Santos on
Example usage of the class below. I added an Add() overload to make it
easier to add elements for the specific TSlaveData fields:

#include <windows.h>
#include <conio.h>
#include <stdio.h>    // printf
#include <string.h>   // strncpy
#include <list>
#include <string>
#include <iostream>

using namespace std;

const DWORD MAX_JOBS = 10;

typedef struct _tagTSlaveData {
   DWORD jid;            // job number
   char  szUser[256];
   char  szPwd[256];
   char  szHost[256];
} TSlaveData;

class CBucket : public std::list<TSlaveData>
{
public:
   CBucket()  { InitializeCriticalSection(&cs); }
   ~CBucket() { DeleteCriticalSection(&cs); }

   void Add( const TSlaveData &o )
   {
      EnterCriticalSection(&cs);
      insert(end(), o);
      LeaveCriticalSection(&cs);
   }

   // Convenience overload: builds the TSlaveData from its fields.
   void Add(const DWORD jid,
            const char *user,
            const char *pwd,
            const char *host)
   {
      TSlaveData sd = {0};   // zero-init keeps the strings null-terminated
      sd.jid = jid;
      strncpy(sd.szUser, user, sizeof(sd.szUser) - 1);
      strncpy(sd.szPwd,  pwd,  sizeof(sd.szPwd)  - 1);
      strncpy(sd.szHost, host, sizeof(sd.szHost) - 1);
      Add(sd);
   }

   // Non-blocking fetch of the head item; FALSE when the queue is empty.
   BOOL Fetch(TSlaveData &o)
   {
      EnterCriticalSection(&cs);
      BOOL res = !empty();
      if (res) {
         o = front();
         pop_front();
      }
      LeaveCriticalSection(&cs);
      return res;
   }

private:
   CRITICAL_SECTION cs;
} Bucket;


void FillBucket()
{
   for (DWORD i = 0; i < MAX_JOBS; i++)
   {
      Bucket.Add(i, "user", "password", "host");
   }
}

//----------------------------------------------------------------
// Main Thread
//----------------------------------------------------------------

int main(int argc, char *argv[])
{
   FillBucket();
   printf("Bucket Size: %d\n", (int)Bucket.size());

   TSlaveData o = {0};
   while (Bucket.Fetch(o)) {
      printf("%3d | %s\n", (int)o.jid, o.szUser);
   }
   return 0;
}

In your mongoose/OCR thingie, mongoose will Bucket.Add() each request
and each spawned OCR thread will do a Bucket.Fetch() (see the
worker-thread sketch below).

Do it right and it ROCKS!
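
Something like this for the worker side (rough sketch, names made
up). Fetch() is non-blocking, so the worker polls with a short sleep
when the bucket is empty; a real design would likely signal with an
event or semaphore instead of polling:

DWORD WINAPI OcrWorker(LPVOID)
{
   TSlaveData job = {0};
   for (;;)
   {
      if (Bucket.Fetch(job)) {
         // ... run the OCR job described by 'job' ...
      } else {
         Sleep(50);   // nothing queued yet; back off briefly
      }
   }
   return 0;   // not reached
}

// From the mongoose request handler / main thread:
//   Bucket.Add(jobId, "user", "password", "host");
//   CreateThread(NULL, 0, OcrWorker, NULL, 0, NULL);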

--
HLS

Hector Santos wrote:

> Peter Olcott wrote:
>
>> I still think that the FIFO queue is a good idea. Now I will have
>> multiple requests and on multi-core machines multiple servers.
>
>
> IMO, it's just that it's an odd approach to load balancing. You are
> integrating software components, like a web server with a
> multi-thread-ready listening server, and you are hampering it with
> single-threaded-only FIFO queuing. It introduces other design
> considerations. Namely, you will need to consider a store-and-forward
> concept for your requests and delayed responses. But if your request
> processing is very fast, maybe you don't need to worry about it.
>
> In practice the "FIFO" would be at the socket or listening level,
> with load balancing handled by restricting and balancing your
> connections with worker pools, or by simply letting connections wait,
> knowing that processing won't take too long. Some servers have
> guidelines for waiting limits. For the Web, I don't recall coming
> across any specific guideline other than a practical one per
> implementation. The point is you don't want the customers waiting too
> long - but what is "too long"?
>
>> What is your best suggestion for how I can implement the FIFO queue?
>> (1) I want it to be very fast
>> (2) I want it to be portable across Unix / Linux / Windows, and maybe
>> even Mac OS X
>> (3) I want it to be as robust and fault tolerant as possible.
>
>
> Any good collection class will do as long as you wrap it with
> synchronization. Example:
>
>
> typedef struct _tagTSlaveData {
>    ... data per request.........
> } TSlaveData;
>
> class CBucket : public std::list<TSlaveData>
> {
> public:
>    CBucket()  { InitializeCriticalSection(&cs); }
>    ~CBucket() { DeleteCriticalSection(&cs); }
>
>    void Add( const TSlaveData &o )
>    {
>       EnterCriticalSection(&cs);
>       insert(end(), o);
>       LeaveCriticalSection(&cs);
>    }
>
>    BOOL Fetch(TSlaveData &o)
>    {
>       EnterCriticalSection(&cs);
>       BOOL res = !empty();
>       if (res) {
>          o = front();
>          pop_front();
>       }
>       LeaveCriticalSection(&cs);
>       return res;
>    }
> private:
>    CRITICAL_SECTION cs;
> } Bucket;
>
>
>
>



--
HLS