From: Joseph M. Newcomer on
See below...
On Tue, 30 Mar 2010 20:09:27 -0400, Hector Santos <sant9442(a)> wrote:

>Peter Olcott wrote:
>>> When you stop talking about the need to allocate
>>> contiguous physical memory, recognize ....
>> The official C++ standard defines a std::vector as
>> contiguous memory are they wrong too?
>But its still virtualized contiguous memory! It still uses the HEAP
>MANAGER which means you are 100% under the control of virtual memory
It is very tricky. For example, the VirtualXXXX functions work on 4K page boundaries and 64K
allocation granularity, and the HeapXXXX functions and malloc/new work on smaller units. So if you want
to lock your new/malloc space down, you will have to lock down at least one entire page
and possibly two. Of course, if you use a MMF, it is inherently mapped at a page (and
probably allocation granularity) boundary, so using VirtualAlloc to mark the pages or
VirtualLock to lock them should be easy. Of course, you get the problem that a MMF is
mapped into different virtual addresses in different processes, so the "addresses" used
must be process-independent (and therefore something like __based pointers, which are
process-relative offsets, becomes an issue), but since he's not anywhere close to
comprehending this, let's not confuse the issue by pointing out the obvious.
>While you can attempt to lock this virtual memory and I don't think
>you can mix the HeapXXXXX functions with the VirtualXXXXX functions,
>but even if you can, you are still LIMITED to locking a certain system
>set "view" size of this total allocation - that's called your WORKING
>SET. From my readings, the system maximum working set per process is:
> 32 bit: 1.5 GB
> 64 bit: 8 GB
I think these advanced concepts are lost on him because they are not the "essence" of
virtual memory. Of course, paging isn't, either, but hey, your thick sludge is his
essence. Working set is a metaconcept of virtual memory+paging. Now we are REALLY far
away from the essence of virtual memory! Never mind that such concepts are ESSENTIAL to
good system performance!

Back in the days of TSS/360 on the 360/67, you could LPSW (Load Program Status Word),
which held the Instruction Pointer, and get up to 6 page faults before that instruction
completed. It worked like this:

1. The page that held the instruction wasn't loaded (was "mapped out").
2. The instruction spanned two pages, and the second page was mapped out
3. The first source data value of an MVC (Move Characters) instruction was on a missing page
4. The target location of the MVC was on a missing page
5. The buffer being moved crossed a page boundary to a missing page
6. The target buffer crossed a page boundary to a missing page

MVC could only move 256 bytes, maximum, so you couldn't cross multiple page boundaries,
and there were no indirect addresses that required additional fetches.

This meant MASSIVE overhead. Peter Denning did his working set work on another 360/67 (I
think at Purdue), but by using the working set, it was pretty much guaranteed that when
you started a thread (or a process, since we were still in the Bad Old Days when every
process had exactly ONE thread!) that all the pages would be in memory, thus eliminating
the flurry of page faults on process (thread) dispatch. Note that if the existing
process/thread had been suspended just after a CALL instruction (BALR, Branch and Link
Register, which put the return address in a register; by tradition R14 was the return
address, and "ret" was "BR 0(14)", a branch to the location 0 bytes off where R14 pointed),
and the target of the call was not a page in the working set, and the data buffers were
not in the working set, you got worst-case behavior, as described above, but the nature of
real programs meant that this happened rarely, and most of the time it meant 0 page faults
in the process.

Statistically, working set wins big. Part of the trick that makes this work is that
although there really are "page faults," they are handled in the background, and that factors
the scheduler out of the overhead (the TSS/360 scheduler consumed 37% of the CPU time in
real operation). It wasn't the six page faults that were the killer; it was the six
invocations of the scheduler.
>If this "VIEW SIZE" is too small for you, you need to use AWE.
>What you are not understanding is the Windows will intelligently
>manage the memory fragments that you need and mostly use and your
>Windows 7 does this better. Think of your disk. At the application
>level, the file data looks like contiguous data to you. It appears
>serialized. But in reality, it could be fragmented into clusters on
>the disk. Same idea with Virtual Memory.
Ohh, you just spoiled it. Disk fragmentation is another performance-killer, particularly
for database systems (do we really want to talk about ISAM overflow quotas here? And
their impact on performance?) and transacted databases in particular tend to fragment
disks. But the MySQL-as-queue model requires a transacted database interaction,
especially with the recently-introduced fault-tolerance being non-negotiable.
>Overall, you keep thinking that the system is too slow for you, when
>the reality is you are too slow for it. But you are driving 25 mph on
>at 60 mph highway, not only are you slower, you are taking up two
>lanes slowing everyone else up.
Nice analogy.
Joseph M. Newcomer [MVP]
email: newcomer(a)
MVP Tips:
From: Joseph M. Newcomer on
See below...
On Tue, 30 Mar 2010 13:39:29 -0500, "Peter Olcott" <NoSpam(a)> wrote:


>Remember how we met? (In a 2005 email, about this article
>of yours)
>That said, there is almost no reliable way to send mouse
>clicks to an application; lots of
>people have tried this and failed. You are in the model of
>"I want to write a Windows
>scripting language", and it is nearly impossible to get
>right. I've been involved with at
>least two projects that failed miserably
Hmm. I just re-read that article, and I think it is all still true today. And both those
projects DID fail, so that is completely true. And there is still no really reliable way
to simulate mouse clicks in an application. So what is the point you are trying to make?
>> ****
>> Joseph M. Newcomer [MVP]
>> email: newcomer(a)
>> Web:
>> MVP Tips:
Joseph M. Newcomer [MVP]
email: newcomer(a)
MVP Tips:
From: Joseph M. Newcomer on
See below...
On Tue, 30 Mar 2010 17:42:28 -0400, Hector Santos <sant9442(a)> wrote:

>Joseph M. Newcomer wrote:
>>> Sure I have you just haven't gotten to it yet, email. I
>>> haven't got any more time for this discussion. I am
>>> convinced that I will get it right. I will get back to you
>>> on this when I am pretty sure that I have it right and you
>>> can double check my final design.
>> ****
>> Oh, in your fantasy world, email is a reliable delivery mechanism? It isn't out here in
>> the real world. And did you read Hector's comments on email acknowledgement (and he's far
>> more an expert on email than I am; all I know is that there are serious reliability issues
>> with email; the evidence is that people send me email which I never receive, and which
>> never even arrive at my ISP, although there is a record that they were sent).
>I am an active member and contributor of the SMTP working
>group/mailing list. I'm acknowledged in the RFC 5321 SMTP
>specification, and also in the recently endorsed RFC 5598, "Internet
>Mail Architecture," a document by one of the fathers of
>electronic mail, David Crocker:
>I write and market a very highly integrated modern mail system and
>have been doing mail systems since the 80s. I think I know maybe *a
>little* about this. :)
Which is why I said your credentials were better than mine in this area!
>> Apparently, you once read an article on email, and now believe you understand it
>> completely. Even your concept of a "verifiable email address" shows a degree of
>> cluelessness that is, alas, unsurprising.
>Abstract level thinking :) which is ok, if he had the integration right.
>For example, for all this fault tolerant talk, if the machine dies,
>how can it even send email? That means he needs integrated
>redundancy that deals with email. His database has to be on a remote
>server as well, which I'm sure he didn't take into account.
If he uses a transacted database, upon recovery, there can be a process that scans the
database, detects a transaction that was "in flight" but "uncompleted," and sends the
email then. Of course, this can stretch the 500ms limit to hours or days, but what's a
little flexibility with the realtime window, given the mooshiness of the rigid and
non-negotiable requirements?

Of course, this DOES presume that "recovery" is a possibility; if the hard drive fried,
there IS no "recovery".

Oh, wait a moment: he's using RAID 1. And we know that it has ZERO performance cost over
a regular hard drive, because he wouldn't have chosen to use it if it had any performance
penalty. Unless throwing buzzwords around has zero cost. And this presumes that someone
decides that it is not simply easier to replace the failed computer without trying to do any
recovery. Of course, this would be part of his contract with the ISP, to guarantee that
no matter how cost-ineffective it is for the ISP, the ISP will have to try to do recovery.
And the commodity ISP will agree to this without charging extra. And all the little pink
puppies will live happily ever after, the evil sorcerer will be conquered, and the Ring
will be returned to the volcanoes of Mordor. Sorry, I get carried away. I sometimes lose
my grasp of what is fantasy and what is reality.

(And in December, a component failure in the SCSI backplane of my server corrupted two of
my three RAID-5 drives. So much for redundancy. Fortunately, my Business Continuity
clause in my insurance policy meant that my business insurance covered the US$6,000
recovery cost).
>He hasn't come close to realizing what are all the integration and
>communications issues are. He discovers another piece on a daily basis
>but just fails to see how things fit - which is OK, not everyone can
>be a good integrator, but Joseph, Mary, Charlie, Peter, John, Sally
>and David - this guy really wants to act like a maroon and as Delgado
>said, he really thinks he is serious!
Real systems are astoundingly complex. Which is why, when I had to do massive complex
integration, I got experts to help me. Key to this is that I LISTENED CAREFULLY to what
they were telling me. I outsource my site maintenance, because I don't want to be
bothered with concepts such as "roaming profiles". I now know what they are, but I still
don't want to know the details. And this costs me real dollars to get this support.
Several hundred a month.

I had to build a computer room in 1981 (for computer delivery in March, 1982, of a huge
mainframe computer). I had to learn HVAC, humidity control, fire systems, power
distribution (did you know that most industrial buildings have 240-volt delta 3-phase
systems, and mainframes of that era required 208-volt Y 3-phase power? I didn't when I
started, but a $20,000 delta-to-Y converter with power conditioning and regulation solved
the problem...) I spent close to $100,000 building that room, and there was no room for
error. It had to be perfect. You better believe I got REAL experts involved early! HUGE
sets of interacting systems (how does the HVAC system shut down the computer when it
fails? I BUILT the relay box that activated the shutdown! I had to, because the signals
produced by the HVAC system were incompatible with the inputs required by the power
system, so I had to DESIGN it as well. And another to handle the signals from the fire
suppression system (Halon), because its signals were incompatible with the power
conditioning system.) Later people dealt with interfacing the Internet to our machine, a
nontrivial action in those days (1982), that involved boxes that cost $70,000 each (we
needed two), expensive and special lines from the predecessor of Verizon (Bell of PA), and
integrating problematic software into the operating system. By that time, I'd hired
experts to do this for me. And I listened to them, and they had to convince me that they
were about to do the right thing, and that all alternatives had been explored, and we had
real budget numbers and schedules with milestones. I could do this because for nearly a
year I had been the person having to do that. (Operating systems in those days did NOT
come with TCP/IP built-in! In fact, TCP/IP was the New Kid On The Block, just having been
released, so even FINDING the appropriate software was a challenge.)

I see none of this "due diligence" being manifested in these conversations. Not that we
aren't trying to tell him.

The almost-daily morphing of the requirements is seriously scary. The fact that new
requirements keep getting added that are incompatible with previous requirements is not a
good indicator. The fact that every problem is solvable with an apparently zero-cost
buzzword (which is usually used incorrectly) is not a good indication.

But boiled down to its essence, it is simple: this system shall be perfect in all ways.
And this perfection will require zero effort, and incur zero performance cost, and zero
dollar cost. And zero support cost.

Corollary: this system will be such a financial success (at $0.10/transaction) that he will
soon have unlimited funds to hire people to solve all the very real problems he has been
ignoring. [He even said so!]

One thing I learned being self-employed: every expense turns out to be expressible
precisely in hours of my time. So to hire me for 1 hour will require 1,000 transactions.
And I'm cheap (I've been told I consistently underprice myself). A full-time person
requires about 250 transactions for every hour they work, so to support a full-time
person requires 1370 transactions every day, 57 transactions/hour EVERY HOUR OF EVERY DAY
just to break even. That's a little under 1/minute.

Oh, I forgot. I've also had to develop business plans, work with people developing
marketing plans, and so on. I guess this has given me unfortunate biases about how to
deal with costs. It must be wonderful to go through life free of all biases. Then
ANYTHING is possible!

Note that by handling an average of 1/minute 24/7, this produces zero profit, does not
cover the cost of the server, just enough to cover the base salary ($50,000) of one
full-time person. I have not taken into account other costs, such as payroll taxes,
health insurance, and other overheads. Half a million transactions per year, just to
cover salary. One ring to rule them all...and the Three Little Kittens found their
mittens, and everyone lived Happily Ever After. Whoops, there I go, losing my grip on
reality again...

Joseph M. Newcomer [MVP]
email: newcomer(a)
MVP Tips:
From: Joseph M. Newcomer on
See below...
On Tue, 30 Mar 2010 12:45:40 -0500, "Peter Olcott" <NoSpam(a)> wrote:


>> page faults would only occur during loading into the first
>> process,
>(1) I never saw that you ever said anything like this.
Hector (I think it was Hector) gave a citation to a sequence of messages from December,
which I went back and read, and sure enough, there is a mention of VirtualLock in it!
>(2) Is it really true, even in the case where pages loaded
>exceeds physical RAM?
No. VirtualLock will fail then, and you will have to lock down only what it allows you to
lock down. The rest will page fault. That's what happens when the virtual address space
oversubscribes the physical address space. Or is this not obvious from the previous
discussions? I would have thought it was.

Of course, if you have an algorithm for storing ten pounds of sugar in a five-pound bag,
feel free to use it.

But I believe you insisted that you were going to try to store ten pounds of sugar in a
twenty-pound bag (having more physical memory than virtual memory, by at least a factor of
2) so I don't see why you think this is an issue.
>> no different than what you are already experiencing. Oh
>> well, I'm sorry reality
>> doesn't conform to your fantasies, but I can tell you
>> repeatedly you are wrong. Your
>> insistence that you are right does not make you right. It
>> only proves you don't
>> understand what anyone is telling you.
>> joe
>> ****
>> Joseph M. Newcomer [MVP]
>> email: newcomer(a)
>> Web:
>> MVP Tips:
Joseph M. Newcomer [MVP]
email: newcomer(a)
MVP Tips:
From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)> wrote in
message news:r0j6r519nth51antn36r4td6pte2eorv8t(a)
> See below...
> On Tue, 30 Mar 2010 17:50:27 -0500, "Peter Olcott"
> <NoSpam(a)> wrote:
> ****
>>> A refusal to learn the proper vocabulary, based on a
>>> view
>>> that it is not necessary to use
>>> the technical language correctly, also demonstrates a
>>> lack
>>> of professionalism (a fact).
>>> When you stop talking about the need to allocate
>>> contiguous physical memory, recognize
>>The official C++ standard defines a std::vector as
>>contiguous memory are they wrong too?
> ****
> They require contiguous VIRTUAL memory. Essentially, they
> require contiguous memory in
> the environment in which the code is operating, and in the
> case of a virtual memory
> system, that means contiguous VIRTUAL memory, which
> probably is NOT contiguous physical
> memory!

OK. I also read the Wikipedia article that says a lot of
what you said. I never realized before that Virtual Memory
also handles memory fragmentation.

> The fact that you can even ask this question demonstrates
> that you learned NOTHING from
> the discussion of memory!
> joe
> ***
> Joseph M. Newcomer [MVP]
> email: newcomer(a)
> Web:
> MVP Tips: