From: Hector Santos on
Yeah, it's crazy, Joe. I think he is more fascinated by his hardware and
what it can do than by the Windows OS and what he can do with it. I
think he just wants someone to write it for him.

BTW, I had a bug in GetFifo(): I had a LIFO there. :) Step 2 in
GetFifo() should be changed to:

2 - prepare sql statement, get the oldest row

     sql = "select * from table order by id limit 1"
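
For what it's worth, here is roughly what each GetFifo() "pop" costs once
it is spelled out against the MySQL C API. This is only a sketch: the
"jobs" table, its columns, and the connection parameters are made up, and
there is no transaction or row locking, so two readers could pop the same
row.

// Sketch only: the GetFifo() steps written against the MySQL C API.
// The "jobs" table (id INT AUTO_INCREMENT PRIMARY KEY, payload TEXT) and
// the connection parameters are hypothetical.
#include <mysql/mysql.h>
#include <cstdio>
#include <string>

bool GetFifo(std::string &payload)
{
    MYSQL *conn = mysql_init(NULL);                        // 1 - connect
    if (!mysql_real_connect(conn, "localhost", "user", "password",
                            "queue_db", 0, NULL, 0)) {
        mysql_close(conn);
        return false;
    }

    bool ok = false;
    // 2 - oldest row first; without ORDER BY the server may hand back any row
    if (mysql_query(conn,
            "SELECT id, payload FROM jobs ORDER BY id ASC LIMIT 1") == 0) {
        MYSQL_RES *res = mysql_store_result(conn);         // 3/4 - fetch, if any
        MYSQL_ROW  row = res ? mysql_fetch_row(res) : NULL;
        if (row) {
            payload = row[1] ? row[1] : "";
            char del[64];                                  // 5/6 - delete by id
            snprintf(del, sizeof(del),
                     "DELETE FROM jobs WHERE id = %s", row[0]);
            // 7 - with a transactional engine, this commit is flushed to disk
            ok = (mysql_query(conn, del) == 0);
        }
        if (res) mysql_free_result(res);
    }
    mysql_close(conn);                                     // 8 - close
    return ok;
}

Every call pays a connect, a parse, a result fetch, a DELETE, and a
commit that is forced out to disk, which is exactly the point Joe keeps
making about "resident in memory".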


--
HLS

Joseph M. Newcomer wrote:

> I decided it was easier just to say "you are a complete twit" because he obviously doesn't
> understand reality. But we KNOW that MySQL is going to keep this queue "resident"
> because, well, because it sounds cool. I guess. You and I both know what is going to
> happen, but he has this unwavering faith in "resident in memory", not understanding what
> is meant by "transacted database" or that when that transactional update is committed that
> data will be FORCED out to the disk. But why should reality be allowed to deter him from
> his treasured design? Particularly when the wishful-thinking approach to performance is
> running full-on?
>
> We should all chip in and buy him a pack of Tarot cards so he can use them for design and
> stop bothering us. Or maybe an I Ching set for doing performance measurement (so he can
> refer to a "world renowned" technique for deriving his information).



> joe
>
> On Thu, 25 Mar 2010 07:22:21 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:
>
>> Peter Olcott wrote:
>>
>>>>> Ah so this is the code that you were suggesting?
>>>>> I won't be able to implement multi-threading until volume
>>>>> grows out of what a single core processor can accomplish.
>>>>> I was simply going to use MySQL for the inter-process
>>>>> communication, building and maintaining my FIFO queue.
>>>> ****
>>>> Well, I can think of worse ways. For example, writing the
>>>> data to a floppy disk. Or
>>>> punching it to paper tape and asking the user to re-insert
>>>> the paper tape. MySQL for
>>>> interprocess communication? Get serious!
>>> Can you think of any other portable way that this can be
>>> done?
>>> I would estimate that MySQL would actually keep the FIFO
>>> queue resident in RAM cache.
>> Huh?
>>
>> Will MySQL keep a FIFO queue resident?
>>
>> WOW! This is unbelievable.
>>
>> Do you know what MySQL is? Or even a FIFO queue?
>>
>> Honestly, do you really understand what MySQL or any SQL engine is?
>>
>> And you are worried about performance?
>>
>> Here are your functions for a FIFO:
>>
>> AddFifo()
>> {
>> 1 - connect to SQL server
>> 2 - prepare sql statement
>>
>> sql = "insert into table values (that, this, that, ...)"
>>
>> 3 - execute sql
>> 4 - close
>> }
>>
>> GetFifo()
>> {
>> 1 - connect to SQL server
>>
>> 2 - prepare sql statement, get last one
>>
>> sql = "select * from table limit 1, -1"
>>
>> 3 - execute sql
>>
>> 4 - Fetch the Record, if any
>>
>> 5 - get id for record
>> 6 - prepare DELETE sql statement
>>
>> sql = "delete from table where id = whatever"
>>
>> 7 - execute sql
>>
>> 8 - close
>> }
>>
>> You are crazy out of this world and nuts if you expect this to be an
>> efficient "FIFO" concept.
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm



--
HLS
From: Hector Santos on
Joseph M. Newcomer wrote:


> Or they were testing the limits of your credulity. Reminds me of the Calvin & Hobbes
> cartoon: The family is in the car. Calvin: "Dad, how do they determine the weight limit
> of a bridge?" Dad: "They run bigger and bigger trucks over it until it collapses, then
> they rebuild it exactly and post the weight limit"


I like that one. :)

> Can you explain why you would accept, without question, such a patently absurd suggestion
> from one newsgroup while ignoring all the good advice you've been getting in this one?

Well, he probably didn't ask the right question or they haven't had
the time yet to pry it out of him.

--
HLS
From: Joseph M. Newcomer on
See below...
On Wed, 24 Mar 2010 23:09:34 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:ohnlq5924q5tisi5vhkl46k8innq5vt7u0(a)4ax.com...
>> See below...
>> On Tue, 23 Mar 2010 09:16:34 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>
>>>
>>>"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in
>>>message
>>>news:u8xPnamyKHA.2552(a)TK2MSFTNGP04.phx.gbl...
>>>> Hector Santos wrote:
>>>>
>>>>> PS: I noticed the rand() % size is too limited; rand() is
>>>>> capped at RAND_MAX, which is 32K. Change that to:
>>>>>
>>>>> (rand()*rand()) % size
>>>>>
>>>>> to get a random range from 0 to size-1. I think that's
>>>>> right; maybe Joe can give us a good random generator
>>>>> here, but the above does seem to provide practical,
>>>>> decent randomness for this task.
>>>>
>>>> Peter, using the above RNG seems to be a better test since
>>>> it hits a wider spectrum. With the earlier one, it was
>>>> only hitting ranges up to 32K.
>>>>
>>>> I also noticed that when the 32K RNG was used, a dynamic
>>>> array was 1 to 6 faster than using std::vector. But when
>>>> using the above RNG, they were both about the same. That is
>>>> interesting.
>>>>
>>>> --
>>>> HLS
>>>
>>>I made this adaptation and it slowed down by about 500% due
>>>to a much smaller cache hit ratio. It still scaled up to four
>>>cores with 1.5 GB each, and four concurrent processes only
>>>took about 50% more than a single process.
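
As an aside on the RNG above: (rand()*rand()) % size does reach past 32K,
but the product of two uniform draws is far from uniform, and it overflows
a signed int on platforms where RAND_MAX is larger than 32767. A minimal
sketch of an alternative that still uses only rand() (WideRand is a
made-up helper name):

// Sketch only: combine two independent 15-bit draws into one 30-bit value
// (0 to about 1.07e9) and reduce it modulo size.  The modulo still adds a
// little bias, but the distribution is flat enough for a cache-stress test.
#include <cstdlib>

inline size_t WideRand(size_t size)
{
    unsigned hi = (unsigned)(rand() & 0x7FFF);   // low 15 bits of first draw
    unsigned lo = (unsigned)(rand() & 0x7FFF);   // low 15 bits of second draw
    return ((hi << 15) | lo) % size;
}

Indexing with data[WideRand(size)] then exercises the whole array instead
of clustering the way a product of two rand() calls does.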
>>>
>> ****
>> If you use multiple threads in a single process, or
>> multiple processes with a shared
>> memory segment implemented by a Memory Mapped File, you
>> will reduce the footprint since
>> either all threads (in one process) share the same memory
>> footprint, or the multiple
>> processes share largely the same memory footprint.
>
>Yup. So if it's fast enough with multiple processes it will
>surely be fast enough as multiple threads, wouldn't it?
>
****
Note that I was assuming you were going to have a single Web server and be running
multiple child processes. As Hector has pointed out, if you have embedded your service
within the Web server, this cannot possibly work, for all the reasons he has pointed out.
Sadly, you have refused to understand even the most basic socket programming concepts needed to
realize that you cannot run multiple port 80 servers. And you still think that running
multiple threads in a single process is such a totally weird idea you are unwilling to try
it. And your multiple-process approach, even if you run them as child processes of your
Web server, does not scale up with cores; it only scales with available memory (and we
gave you the solution for this, memory-mapped files, but you refused to pay attention
because your Tarot Cards said that this would not work). You insist that all our
solutions cannot possibly work. Even though we've been building real systems using these
techniques for many years, decades, even. Be careful, or the Department of Defense will
declare your brain a National Resource (denser than depleted uranium).
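
For the record, here is a minimal Win32 sketch of the memory-mapped-file
approach being described. The path argument and the helper name
MapSharedDfa are made up; the point is only that every process that maps
the same file shares the same physical pages.

// Sketch only: map one read-only data blob (e.g. the DFA) so that every
// process which opens it shares the same physical pages.
#include <windows.h>

const void *MapSharedDfa(const wchar_t *path, HANDLE *phFile, HANDLE *phMap)
{
    *phFile = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                          OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (*phFile == INVALID_HANDLE_VALUE)
        return NULL;

    // Read-only mapping: the pages are backed by the file itself, and the OS
    // shares them among all processes that map the same file.
    *phMap = CreateFileMappingW(*phFile, NULL, PAGE_READONLY, 0, 0, NULL);
    if (*phMap == NULL)
        return NULL;

    // The view can land at a different address in each process, so the data
    // inside the file must use offsets, not absolute pointers.
    return MapViewOfFile(*phMap, FILE_MAP_READ, 0, 0, 0);
}

Four processes against a 1.5 GB DFA mapped this way cost roughly 1.5 GB of
physical RAM, not 6 GB.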

You could have obtained all the data a week ago if you had just written a multithreaded
version and tested it. Instead, you wasted everyone's time telling us we were totally
wedged. Then, after we were proven right, you now insist that some completely meaningless
number from some indeterminate algorithm has set the performance mark (12MB/sec) which you
are not achieving. Go ahead, build systems that cannot possibly do what you need! You
clearly know better than all of us what is going to happen, because of your faith, use of
Tarot Cards or Divine Revelation of how things work and what performance is going to be.
Have fun.
joe
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on
See below...
On Wed, 24 Mar 2010 23:59:32 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:

>
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>message news:1bqlq5lss7pnt4lo669n2hg6peeuj3qlfm(a)4ax.com...
>> OK if you have designed your software "correctly", put
>> that DFA into a binary file which
>> can be mapped into any process at any aribitrary address
>> and your software continues to
>> work correctly. If you can't make this trivial change,
>> then you have some serious design
>> issues to address.
>> joe
>>
>
>It is the unnecessary learning curve
***
"I'm clueless, and I want to stay that way!" That has become evident in your responses.
***
>that I don't want to
>deal with. If one does not carefully pare the unnecessary
>learning curves from the tree of possibilities, one would
>spend all of their time learning about doing things and
>never actually get around to doing them.
***
Except, as I keep pointing out, it isn't an UNNECESSARY learning curve; it is essential to
learn to do these things to get the performance you want! "I don't need to learn to read,
my grandpa, he was a successful businessman and he was functionally illiterate all his
life, and I should be able to succeed without needing to learn to read!". (I actually
know of someone, a family member, who ran a business but couldn't read, but it was two
generations back. It doesn't work today).

You think all this doesn't matter. You want 500ms response time; there is a PRICE TO PAY
to get that, and it is in learning how to exploit all the features of the OS that can
reduce your overheads to manageable levels. You want the performance, you pay the price!
TANSTAAFL (There Ain't No Such Thing As A Free Lunch).
****
>
>That said, HTTP: The Definitive Guide is providing most of the
>info that I need. Things such as pipelined connections will
>help me to get as close as possible to my 500 ms goal.
****
OH. I thought we told you some of this weeks ago, but now that you are doing something
about the learning curve and found something that confirms everything we told you, you
suddenly tell us we were right all along! (And didn't you know that ALL TCP/IP
connections are pipelined? Didn't I tell you about Sliding Window Protocol? I'm SURE I
said that somewhere... and if you'd followed up and read about it, you would have known it
was a pipelining protocol!)
joe
****

Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Peter Olcott on

"Ismo Salonen" <ismo(a)codeit.fi> wrote in message
news:OjTbBi$yKHA.5332(a)TK2MSFTNGP02.phx.gbl...
> Peter Olcott wrote:
>> "Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>> message
>> news:rdqlq5dv2u8bh308se0td53rk7lqmv0bki(a)4ax.com...
>>> Make sure the addresses are completely independent of
>>> where the vector appears in memory.
>>>
>>> Given you have re-implemented std::vector (presumably as
>>> peter::vector) and you have done
>>> all the good engineering you claim, this shouldn't take
>>> very much time at all. Then you
>>> can use memory-mapped files, and share this massive
>>> footprint across multiple processes,
>>> so although you might have 1.5GB in each process, it is
>>> the SAME 1.5GB because every
>>> process SHARES that same data with every other process.
>>>
>>> Seriously, this is one of the exercises in my Systems
>>> Programming course; we do it
>>> Thursday afternoon.
>>> joe
>>
>> But all that this does is make page faults quicker, right?
>> Any page faults at all can only degrade my performance.
>>
>
> just my two cents :
>
> memory mapped files are paged into process memory when
> a page is referenced (read or written) for the first time. This
> is the pagefault mechanism.
> It is the most efficient way of accessing the data. Reading the
> data with ReadFile() or others in that family is slower
> (well, ReadFile directly into your buffers would give quite
> similar performance, because it uses the paging mechanism).
> The pagefault mechanism must be quite optimized, as it is
> the basis for modern operating systems (Windows, Linux,
> etc.). The inner workings are quite delicate and I highly
> doubt you can outperform it.
>
> It seems that you have a plausible product idea, but you lack
> the knowledge of
> how the operating system works. Have you read the articles
> others have

One thing that I know is that a process with page faults is
slower than a process without page faults, everything else
being equal.

It continues to work (in practice) the way that I need it to
work, and I have never seen it work according to Joe's
theories. Whenever there is plenty of excess RAM (such as 4
GB more than anything needs) there are no page faults in my
process. I even stress-tested this quite a bit and had four
processes taking 1.5 GB each (of my 8 GB) and still saw zero
page faults in any of the four processes.
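
For completeness: if the concern is faults showing up during request
processing, one common trick with a mapped region is to touch every page
once at startup, so the faults are paid before any request arrives. A
minimal Win32 sketch (PreFault is a made-up name):

// Sketch only: read one byte from every page of a mapped region so the
// page faults happen at startup rather than while a request is being served.
#include <windows.h>

void PreFault(const void *base, size_t bytes)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);                          // dwPageSize is 4K on x86

    const volatile char *p = (const volatile char *)base;
    for (size_t i = 0; i < bytes; i += si.dwPageSize)
        (void)p[i];                              // one read per page faults it in
}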

> referred to? You should read Windows Internals (5th
> edition) by Russinovich et al. to get the real data about
> how things work.
>
> (I've been a programmer for a living for less time than Joe,
> but still about 28 years; you should really trust him, as he
> knows what he is talking about. I've also done more than
> just windows/linux/unix/vms/ias/rsx/msdos/OS9 stuff, also
> realtime embedded work where 1ms was a long time.)
>
> Perhaps you could hire someone to help out with the
> implementation stage?
> Surely, if the product is as good as you have described,
> then spending some money to get to market faster is
> usually considered a good idea.
>
> br
> ismo