From: Oliver Regenfelder on
Hello,

Joseph M. Newcomer wrote:
> There was some trash talk about std::vector not being
> able to handle more than MAX_INT elements,
> but as I read std::vector, it uses the STL type
> size_type, which is 64 bits in the 64-bit compiler. Even the most superficial study of
> the documentation reveals in under 30 seconds that
> typedef size_t size_type;
>
> is the definition of size_type (look at allocator::size_type)
>
> so you are right on target about the failure to use the correct data type for the size; it
> should be size_type (which is size_t, which is 64 bits long in the 64-bit world!)
> joe

Well, he used arrays allocated with new, and new takes a std::size_t
in its 'raw form' for the size.

As such, IMHO size_t is the correct type when indexing raw C/C++ arrays.

int *blah = new int[100];

for(size_t index = 0; index < ...)


But if you want to do 100% C++, then this would change to
the following when using std::vector

std::vector<int> blah(100);
for(std::vector<int>::size_type index = 0; index < ...)

Thanks for pointing that out.

Best regards,

Oliver
From: Oliver Regenfelder on
Hello,

Peter Olcott wrote:
>> Then you
>> can use memory-mapped files, and share this massive
>> footprint across multiple processes,
>> so although you might have 1.5GB in each process, it is
>> the SAME 1.5GB because every
>> process SHARES that same data with every other process.
>>
>> Seriously, this is one of the exercises in my Systems
>> Programming course; we do it
>> Thursday afternoon.
>> joe
>
> But all that this does is make page faults quicker, right?
> Any page faults at all can only degrade my performance.

It also reduces overall memory usage as stated earlier.

Say you have 4 processes (not threads!); then each of the 4
processes has its own address space. So if you need the same
data in each process, the simple approach is to:
1) Generate an array or something
2) Load the precalculated data from a file

The problem is that this way, each process has its own independent
copy of the data, and you would use 4*data_size of memory.

Now if you use memory-mapped files, then each process would do
the following:
1) Memory-map the file[1].

If you do this in each process, the OS will do its magic and
recognize that the same file is mapped into memory 4 times, and only
keep one physical copy of the file in memory. Thus you only use
1*data_size of memory.

[1]: You mentioned something somewhere about OS independence, so
have a look at www.boost.org. The Boost.Interprocess library contains
the memory-mapping code. You might also want to consider it for
your threading in case the software is not meant to run only on Windows.


Best regards,

Oliver

From: Hector Santos on
Peter Olcott wrote:

>>> Ah so this is the code that you were suggesting?
>>> I won't be able to implement multi-threading until volume
>>> grows out of what a single core processor can accomplish.
>>> I was simply going to use MySQL for the inter-process
>>> communication, building and maintaining my FIFO queue.
>> ****
>> Well, I can think of worse ways. For example, writing the
>> data to a floppy disk. Or
>> punching it to paper tape and asking the user to re-insert
>> the paper tape. MySQL for
>> interprocess communication? Get serious!
>
> Can you think of any other portable way that this can be
> done?
> I would estimate that MySQL would actually keep the FIFO
> queue resident in RAM cache.

Huh?

Will MySQL keep a FIFO queue resident?

WOW! This is unbelievable.

Do you know what MySQL is? Or even what a FIFO queue is?

Honestly, do you really understand what MySQL or any SQL engine is?

And you are worried about performance?

Here are your functions for a FIFO:

AddFifo()
{
1 - connect to SQL server
2 - prepare sql statement

sql = "insert into table values (that, this, that, ...)"

3 - execute sql
4 - close
}

GetFifo()
{
1 - connect to SQL server

2 - prepare sql statement, get the oldest row

sql = "select * from table order by id limit 1"

3 - execute sql

4 - Fetch the Record, if any

5 - get id for record
6 - prepare DELETE sql statement

sql = "delete from table where id = whatever"

7 - execute sql

8 - close
}

You are out of your mind if you expect this to be an efficient
"FIFO" concept.


--
HLS
From: Hector Santos on
Peter Olcott wrote:

> "Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in

>> A multithreaded FIFO queue would make more sense; less
>> chance of priority inversion
>> effects. An I/O Completion Port makes a wonderful
>> multithreaded FIFO queue with
>> practically no effort!
>> joe
>
> How ?

Well, there will be programming effort with IOCP. His main point is
that you don't understand enough Windows programming concepts; you are
behaving here like a new kid who just discovered the toaster oven
and won't follow other cooks, or read a cookbook on the many
ways to use the toaster oven.
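To make Joe's point concrete: a completion port can be used purely as a FIFO work queue, with no file or socket I/O attached to it at all. A minimal Windows-only sketch (not compiled here; the 42 is an arbitrary key standing in for a pointer to a request object):

```cpp
#include <windows.h>

int main() {
    // A completion port bound to no handle is simply a kernel-managed queue.
    HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

    // Producer side: enqueue a work item. The completion key (and the
    // OVERLAPPED pointer) can carry a pointer to your request object.
    PostQueuedCompletionStatus(iocp, 0, (ULONG_PTR)42, NULL);

    // Consumer side (normally each thread in a worker pool runs this in a
    // loop): blocks until an item is queued; posted items come out in
    // FIFO order.
    DWORD bytes; ULONG_PTR key; LPOVERLAPPED ov;
    GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE);
    // key is 42 here: the item posted above.

    CloseHandle(iocp);
    return 0;
}
```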

Look, your SOCKET is a FIFO. It's inherent. When connections come in,
they are queued; it is designed for multi-threaded worker-pool concepts
to handle the incoming requests.

So what do you do?

You don't use threads, and you slow it down by adding ANOTHER FIFO
queue to run the OCR one request at a time.

But now you think: I've got a MONSTER machine. It's easier for me to
create an EXE with MONGOOSE+OCR and run it 4 times, assuming it will be
1 EXE per core.

Now you have screwed up MONGOOSE: you have 4 competing SOCKET servers
on PORT 80, and that's not possible on a single machine. You can only
have 1 PORT 80 HTTP SERVICE - NOT FOUR!!

So you have automatically BLOCKED three of the EXEs running in memory;
they will NEVER see a socket connection.

But you say ok, "then I have no choice but to use FOUR machines or
even TWO for now."

Now you need a LOAD BALANCER, or just use simple round-robin logic in
DNS by adding two A records to your ZONE for the same domain:

myservice.com.   IN A 1.2.3.4   ; machine 1
myservice.com.   IN A 1.2.3.5   ; machine 2


Under DNS, clients will round-robin over the IPs to connect to. There
is no real concept of load balancing here.

The only load-balancing concept you have is the FIFO at the OCR part.

So it all goes back to the toaster oven. You don't know how to use
the simple toaster oven, and you don't want to listen to anyone telling
you that what YOU are doing is freaking crazy. You don't even realize
that you can't do what you want anyway with one EXE per request when a
socket is involved, without getting into the need for LOAD BALANCING
across multiple machines.

Really Peter, it skipped my mind that YOU can't run multiple EXEs when
you have a socket service embedded in them. But I said it in so many
other ways: you are trying to add a multi-threaded web server into a
single-threaded OCR process, when you should be adding a multi-threaded
OCR process into a multi-threaded service.

It just dawned on me that you cannot run multiple MONGOOSE+OCR EXE
processes - mongoose is restricted to a SINGLE PORT, and you cannot
have multiple services on the same port on the same machine.

--
HLS
From: Hector Santos on
Joseph M. Newcomer wrote:

> See below...
>
> On Tue, 23 Mar 2010 14:11:28 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:
>
>> Ah so this is the code that you were suggesting?
>> I won't be able to implement multi-threading until volume
>> grows out of what a single core processor can accomplish.
>> I was simply going to use MySQL for the inter-process
>> communication, building and maintaining my FIFO queue.
> ****
> Well, I can think of worse ways. For example, writing the data to a floppy disk. Or
> punching it to paper tape and asking the user to re-insert the paper tape. MySQL for
> interprocess communication? Get serious!

Crazy, huh? :) All this talk about performance, worrying about chip
caches, and he throws in this elephant?

Even then, given the limitations of his own design, he picked the
wrong software - he wants SQLITE3! It fits in perfectly with a single-
ACCESSOR queuing concept. :)

--
HLS