From: Hector Santos on
Peter Olcott wrote to Joe:

> I am primarily an abstract thinker, I almost exclusively
> think in terms of abstractions. This has proven to be both
> the most efficient and most effective way to process complex
> subjects mentally.

No. It's called being simple-minded!

> I scored extremely high on an IQ test in
> this specific mode of thinking, something like at least
> 1/1000. My overall IQ is only as high as the average MD
> (much lower than 1/1000).

But an MD understands tools and system-process (body parts)
integration concepts. He knows what triggers what. He understands
how a Human Body is engineered.

You don't have that level of common sense engineering.

> I would not be surprised if your
> overall IQ is higher than mine, actually I would expect it.
> Your mode of thinking and my mode of thinking seem to be at
> opposite end of the spectrum of abstract versus concrete
> thinking.

In other words, you lack common sense and logic. Basically what you
are showing is that IQ on Windows is ZERO!

From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)> wrote in
message news:uk12r5die24557r1evjkvrqovj58qjvc9e(a)
> See below...
> On Sun, 28 Mar 2010 23:03:42 -0500, "Peter Olcott"
> <NoSpam(a)> wrote:
>>>But you can run
>>> with virtual memory WITHOUT paging being a part of it,
>>How is that not like running an automobile's engine without
>>any form of fuel?
> ****
> Because you are being dense and stupid. For example, a
> better analogy would be running an

Calling me stupid (an opinion) demonstrates a lack of
professionalism (a fact).

> automobile engine without transferring the power to the
> wheels. Running the engine is a
> valid act; whether or not that power is transferred to
> something that generates motion is
> a second-order effect.

OK, granted, your analogy is better than mine. Yet even
your analogy still makes my point. The essential purpose of
an automobile is to drive around. Running the engine in
neutral is not meeting the essential purpose of an
automobile. Likewise for virtual memory: virtual memory
without page faults is like running an engine in neutral,
not meeting the essential purpose.

Is the car driving around with the engine in neutral? No.
The practical effect of running a car with its engine in
neutral is the same as if the car were turned off, when
measured against the essential purpose of a car: driving
around.

Is the virtual memory meeting its essential purpose with
zero page faults? No. Is the practical effect of VM without
page faults essentially the same as if VM were turned off,
in terms of fulfilling its essential purpose? Yes.

So, as a shorthand notation, a car turned off and a car
with its engine in neutral could be considered essentially
the same in terms of the car fulfilling its essential
purpose: driving around.

> ****
>>Please try to be very concise.
> ****
> Concise: you are stupid. Slightly longer: virtual memory
> is a system that maps virtual

You ramble far too much around the point without ever
actually getting to the point. I could make up all kinds of
childish insults to address this, but that would be both
unprofessional and inaccurate.

> addresses to physical addresses (which I have said several
> times, and I know of no reason
> you would ignore this other than being stupid). Even
> longer: one of the possible mappings
> is extended to include the value "not in memory", which
> induces paging. But if there are
> no page faults, virtual memory is still running (which
> Hector and I both told you, and
> which Hector pointed you at a Mark Russinovich article, in
> which he explicitly stated that
> virtual memory is ALWAYS running). Really concise:
> CR0<0>==1.
> ****
>>> and paging is an add-on that every
>>> virtual memory system has used since the ATLAS computer
>>> in
>>> 1961, because main memory has
>>> ALWAYS been the expensive bottleneck and therefore is
>>> always oversubscribed. But
>>> you lost
>>> the notion that virtual memory provides private address
>>> spaces to processes
>>How can this work without paging?
> ****
> How stupid do you have to be to keep ignoring what I have
> explained in detail time after
> time? Don't accuse us of simply doing "refute, refute,
> refute" when you seem to suffer
> from the same problem! And don't tell me I didn't tell
> you where to look! I gave you a
> citation to the actual Intel manual, to chapter and pages.
> You DID go read that, didn't
> you? If you didn't read it, why are you telling me that
> my definition of virtual memory
> is flawed? I used my first VM system in 1967 (43 years
> ago) and have been using it on
> nearly every machine I've worked on since, and you are
> telling me that you are an expert
> on what it is? When you make such obviously erroneous
> statements, and ask the same
> question over and over, even after it has been explained
> to you why you are wrong? Get a
> life! Either you LEARN about this from the actual sources
> (the Intel manuals) or you shut
> up and accept the explanations the experts give you,
> because at the moment, you lack every
> conceivable basis for making your ridiculous statements.
> And you can't even produce a
> decent analogy!
> ****
>>Whichever one can work without the other is the more
>>essential one.
> ****
> What part of "virtual memory is a system that maps virtual
> addresses to physical
> addresses" did you fail to comprehend? And what part of
> "Virtual memory systems use
> paging as a mechanism to allow the oversubscription to
> physical memory" did you fail to
> comprehend?
> Virtual memory will run WITHOUT paging, but then you are
> not permitted to have the set of
> running processes oversubscribe physical memory.  You
> already told us that your process
> runs without paging, and that is evidence that virtual
> memory operates without paging! So
> given your effort to prove paging is not required for
> virtual memory, why do you insist
> now that it is necessary for virtual memory? Of course,
> if you erroneously persist in
> defining the term "virtual memory" with "memory that
> requires paging", then your axiom is
> failed, and all conclusions you reach from the failed
> axiom are erroneous. You are
> flat-out WRONG about what defines "virtual memory" and it
> is time you accept this and work
> with the REAL definition of what "virtual memory" actually
> is, not your fantasy
> definition.
> joe
> ****
> Joseph M. Newcomer [MVP]
> email: newcomer(a)
> Web:
> MVP Tips:
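Joe's definition above (virtual memory is a mapping from virtual to physical addresses, where a "not present" entry induces a page fault) can be illustrated with a toy model. This is a sketch of the concept only, not of any real MMU, and all names in it are invented:

```python
# Toy model of the definition being argued: virtual memory is a
# mapping from virtual addresses to physical addresses; a "not
# present" entry induces a page fault.

PAGE_SIZE = 4096

class ToyMMU:
    def __init__(self, page_table):
        # page_table: {virtual page number: physical frame, or None
        # meaning "not present"}
        self.page_table = dict(page_table)
        self.fault_count = 0

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        frame = self.page_table.get(vpn)
        if frame is None:
            # "Not present" -> page fault: the OS would load the page
            # and fix up the mapping; here we just pretend with frame 42.
            self.fault_count += 1
            frame = self.page_table[vpn] = 42
        return frame * PAGE_SIZE + offset

# Every page resident: translation (virtual memory) still happens on
# every access, yet there are ZERO page faults.
mmu = ToyMMU({0: 7, 1: 9})
assert mmu.translate(PAGE_SIZE + 100) == 9 * PAGE_SIZE + 100
assert mmu.fault_count == 0

# Touch an unmapped page: now, and only now, a fault occurs.
mmu.translate(5 * PAGE_SIZE)
assert mmu.fault_count == 1
```

The sketch mirrors the point in dispute: the mapping runs on every access whether or not a fault ever occurs, so "zero page faults" does not mean "virtual memory is off".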

From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)> wrote in
message news:s5n2r5lj03ktt52logni11mhgsudskmkgn(a)
> See below...
> On Mon, 29 Mar 2010 16:35:54 -0500, "Peter Olcott"
> <NoSpam(a)> wrote:
>>It is not that I am stupid, it is that for some reason (I
>>think intentionally) you fail to get what I am saying.
>>I needed something that PREVENTS page faults; MMF does
>>not do that, VirtualLock() does.
> ****
> I never promised that memory mapped files would give you
> ZERO page faults; I only pointed
> out that it can reduce the total number of page faults,
> and distribute the cost of them
> differently than your simplistic model that takes several
> thousand page faults to load the
> data. And I said that in any real world experiment, it is
> essential to gather the data to
> show the exact impact of the architecture.
> ****

The whole goal of the original architecture was to find a
way to prevent page faults to my several GB of data because
it would have taken several minutes to load this data, thus
it was to be kept loaded. All this time I am sure that you
were aware of VirtualLock() but said nothing.

I kept saying that I need zero page faults and you kept
pushing MMF as the solution. I kept saying that MMF are no
good because they don't prevent page faults and I need
something that prevents page faults, and you kept pushing
MMF all the while knowing about VirtualLock().

In any case all of this is now moot. I was finally able to
benchmark the model of my new process and it requires so
much less data that the cache spatial locality of reference
provides a ten-fold improvement in speed. This leaves
plenty of time to load this much smaller data on the fly,
and still provide the 100 ms response time. (The other 400
ms was for internet time).
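The role VirtualLock() plays in this exchange (pinning pages so they can never be evicted, and therefore never fault again after the initial load) can be illustrated with a toy pager simulation. This is only a model of the idea, not Windows API code, and all names are invented:

```python
from collections import OrderedDict

class ToyPager:
    """Tiny LRU pager: 'locked' pages are never chosen as eviction
    victims, so touching them after the initial load can never fault.
    This only illustrates the idea behind VirtualLock()."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = OrderedDict()   # page -> locked? (True/False)
        self.faults = 0

    def lock(self, pages):
        for p in pages:
            self.touch(p)               # fault each page in once, up front
            self.resident[p] = True     # then pin it

    def touch(self, page):
        if page in self.resident:
            self.resident.move_to_end(page)   # LRU bookkeeping, no fault
            return
        self.faults += 1
        if len(self.resident) >= self.capacity:
            # Evict the least-recently-used UNLOCKED page only.
            victim = next(p for p, locked in self.resident.items()
                          if not locked)
            del self.resident[victim]
        self.resident[page] = False

pager = ToyPager(capacity=4)
pager.lock([0, 1, 2])            # three faults to load, then pinned

for _ in range(1000):            # heavy traffic on other pages
    pager.touch(10); pager.touch(11); pager.touch(12)

before = pager.faults
pager.touch(0); pager.touch(1); pager.touch(2)
assert pager.faults == before    # locked pages never faulted again
```

The unlocked pages thrash constantly under memory pressure, but the locked working set stays fault-free, which is exactly the guarantee being asked for in the thread.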

>>A thing can ONLY be said to be a thing that PREVENTS page
>>faults if that thing makes page faults impossible to occur
>>by whatever means.
> ****
> That's assuming you establish that "zero page faults" is
> essential for meeting your
> high-level requirement. You have only said that if you
> have tens of thousands of page
> faults, you cannot meet that requirement, and if you have
> zero, you have no problem. You
> have not established if the limit of page faults is zero,
> one hundred, two hundred and
> thirty-seven, or three thousand. All you know is that
> your initial startup is massively
> slow and you attribute this to the large number of page
> faults you see. You may be
> correct. But without detailed analysis, you have no basis
> for making the correlation.
> ****
>>It is like I am saying I need some medicine to save my
>>life and you are saying here is Billy Bob, he is exactly
>>what you need because Billy Bob does not kill people.
> *****
> No, but if you want "medicine to save your life" do you
> take 5mg once a day or 100mg ten
> times a day? We are not talking alternatives here, but
> dosages. And do you take it with
> some sort of buffering drug to suppress side effects, or
> take it straight (try sitting
> with a friend undergoing chemo, for three hours each trip,
> and drive him home, and you
> will appreciate what buffering means). You have only
> established two extreme endpoints
> without trying to understand what is really going on. Do
> you have detailed performance
> measurement of the internals of your program? (If not,
> why not?) Or do you optimize
> using the by-guess-and-by-golly method that people love to
> use (remember my basic
> principle, derived over 15 years of performance
> measurement: "Ask a programmer where the
> performance bottleneck is and you will get a wrong
> answer")? That principle NEVER failed
> me in 15 years of doing performance optimization). You
> actually DON'T have any detailed
> performance numbers; only some guesses where you have
> established two samples and done
> nothing to understand the values between the endpoints!
> This isn't science, this is as
> scientific as tossing darts over your head at the listing
> behind you and optimizing
> whatever subroutine the dart lands on.
> ****
>>Everyone else here (whether they admit it or not) can also
>>see your communication errors. I don't see how anyone as
>>obviously profoundly brilliant as you could be making this
>>degree of communication error other than intentionally.
> ****
> When I tell you "you are using the language incorrectly"
> and explain what is going on, and
> give you citations to the downloadable Intel manual, I
> expect that you will stop using the
> language incorrectly, and not continue to insist that your
> incorrect usage is the correct
> usage. You foolishly think that "virtual memory"
> necessarily means "paging activity", and
> in spite of several attempts by Hector and me to explain
> why you are wrong, you still
> insist on using "virtual memory" in an incorrect fashion.
> Where is the communication
> failure here? Not on my side, not on Hector's side (he
> pointed you to the Russinovich
> article). And then you come back, days later, and STILL
> insist that "virtual memory" ==
> "paging activity"; it is really hard to believe we are
> talking to an intelligent human
> being. And you still don't have any data to prove that
> paging is your bottleneck, or to
> what degree it is a problem. Instead, you fall back on
> somebody's four-color marketing
> brochure and equate "meeting a realtime window" (and a
> HUGE one) with "absolute
> determinism", which sounds more like a philosophical
> principle, and insist that without
> absolute determinism you cannot meet a realtime window,
> which I tried to explain is
> nonsense. Paging is only ONE possible factor in
> performance, and you have not even
> demonstrated that it matters (you did demonstrate that
> running two massive processes on a
> single core slows things down, which surprises no one).
> ****
>>Notice this I am not resorting to ad hominem attacks.
> ****
> I've given up trying to be polite. It didn't work. If I
> explain something ONCE and you
> insist on coming back and saying I'm wrong, and persist in
> using technical language
> incorrectly, try to justify your decisions by citing
> scientifically unsupportable
> evidence, tell us we don't know what we're talking about
> when you have expended zero
> effort to read about what we've told you, you are not
> behaving rationally.
> Learn how to do science. Learn what "valid experiment"
> means. Learn that "engineering"
> means, quite often, deriving your information by
> performing valid experiments, not
> thinking that real systems are perfect reflections of
> oversimplified models described in
> textbooks, and that you can infer behavior by just
> "thinking" about how these systems
> work. This ignores ALL good principles of engineering,
> particularly of software
> engineering: build it, measure it, improve it. And by
> EXPERIMENTS! You have run two that do not give any
> guidance to optimization, just
> prove that certain extreme points work or don't work.
> Guy L. Steele, Jr., decided that he needed to produce a
> theoretical upper bound on sorting
> (we know the theoretical lower bound is O(n log n)). He
> invented "bogo-sort", which is
> essentially "52-pickup". What you do is randomly exchange
> elements of the array, then
> look at it and see if it is in order. If it is in order,
> you are done, otherwise, try the
> random rearrangement again until the vector is in sorted
> order.
> So you have done the equivalent of running qsort (n log n)
> and bogo-sort and this tells
> you nothing about how bubble sort is going to perform.
> You ran an experiment that
> overloaded your machine, and one which had zero page
> faults, and from this you infer that
> ANY paging activity is unacceptable. This is poor
> science, and anyone who understands
> science KNOWS it is poor science. Until you have
> determined where the "performance knee"
> is, you have NO data, nor do you know where your problems
> are, nor do you know where to
> optimize. So you take a simplified model, run a single
> test, and you have STILL not
> derived anything useful; in fact, your current model is
> subject to priority inversion and
> does not guarantee maximum throughput, even in the ABSENCE
> of page faults. For those of
> us who spent years optimizing code, this is obvious, and
> I've tried to tell you that your
> data is bad, and instead of listening, you insist that
> your two extreme points are the
> only points you need to understand what is going on. Not
> true, not true at all.
> joe
> ****
> Joseph M. Newcomer [MVP]
> email: newcomer(a)
> Web:
> MVP Tips:
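The "bogo-sort" Joe attributes to Guy L. Steele, Jr. is easy to write down. A minimal sketch (random shuffles until sorted; only ever run it on tiny inputs):

```python
import random

def bogo_sort(items, rng=random):
    """Randomly rearrange the list until it happens to be in order:
    the '52-pickup' sort described above. Expected cost is on the
    order of n * n! shuffles, so keep inputs tiny."""
    items = list(items)
    while any(a > b for a, b in zip(items, items[1:])):
        rng.shuffle(items)
    return items

random.seed(1)                       # deterministic demo run
assert bogo_sort([3, 1, 2]) == [1, 2, 3]
```

The joke lands because this terminates (with probability 1) yet tells you nothing useful about practical sorting, which is exactly the complaint about running only two extreme experiments.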

From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)> wrote in
message news:h4p2r55emhlqcnvootbfd4t5h6upchiqn6(a)
> See below...
> On Mon, 29 Mar 2010 12:12:13 -0500, "Peter Olcott"
> <NoSpam(a)> wrote:
>>> Your total cluelessness about TCP/IP comes to the fore
>>> again. Suppose you have
>>> established a connection to the machine. The machine
>>> reboots. What happened to that
>>> connection? Well, IT NO LONGER EXISTS! So you can't
>>> reply over it! Even if you have
>>> retained the information about the data to be processed,
>>False assumption. A correct statement would be I have no way
>>to communicate to the client that you are aware of (see
> ****
> Email is not the same thing as getting the result back
> from the server. And users will
> not expect to get email if they get a "connection broken"
> request unless you tell them,
> and this requires a timeout a LOT larger than your 500ms
> magic number.
> ****

Sure, I know this: there is no possible way in the whole
universe that an email could be sent essentially in parallel
with the post back to the client screen. The entire universe
would crumble to pieces if this ever happened. This
greater-than-500-ms timeout is absolutely and positively
required, and there exists no possible way that anyone could
ever possibly think of any route (such as parallel lines of
communication) around this problem.

>>> In what fantasy world does the psychic plane allow you
>>> to
>>> magically
>>> re-establish communication with the client machine?
>>That one is easy. All users of my system must provide a
>>verifiably valid email address. If at any point after the
>>client request is fully received the connection is lost,
>>output is sent to the email address.
> ****
> Which violates the 500ms rule, by several orders of
> magnitude.
> I'm curious how you get a "verifiably valid" email
> address. You might get AN email
> address, but "verifiably valid" is a LOT more challenging.
> THere are some hacks that
> increase the probability that the email address is valid,
> but none which meet the
> "verifiably valid" criterion.

They can't log in until they click on a link that is sent by
email. The link will also verify that a human is reading the
email.
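One common way to implement the click-a-link verification described here is a random single-use token stored server-side until the link is clicked. This is a sketch under that assumption; every name in it is invented:

```python
import secrets

# Sketch of email-address verification via a single-use link token.
pending = {}       # token -> email address awaiting verification
verified = set()   # addresses whose owner clicked the link

def start_verification(email):
    token = secrets.token_urlsafe(32)   # unguessable link component
    pending[token] = email
    # A real system would now email something like:
    #   https://example.com/verify?t=<token>   (hypothetical URL)
    return token

def complete_verification(token):
    email = pending.pop(token, None)    # single use: token is consumed
    if email is not None:
        verified.add(email)
    return email is not None

t = start_verification("user@example.com")
assert complete_verification(t) is True
assert "user@example.com" in verified
assert complete_verification(t) is False    # link cannot be replayed
```

This gives "the address receives mail and a human acted on it"; it still cannot prove the mailbox will keep accepting mail later, which is the thread's "verifiably valid" objection.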

> ***
>>> And don't tell me you can use the IP address to
>>> re-establish connectivity. If you don't
>>> understand how NAT works, both at the local level and at
>>> the ISP level, you cannot tell me
>>> that retaining the IP address can work, because I would
>>> immediately know you were wrong.
>>> ****
>>>>The key (not all the details, just the essential basis
>>>>making it work) to providing this level of fault tolerance
>>>>is to have the webserver only acknowledge web requests after
>>>>the web request has been committed to persistent storage.
>>> ****
>>> Your spec of dealing with someone pulling the plug, as I
>>> explained, is a pointless
>>> concern.
>>And I have already said this preliminary spec has been revised.
> ****
> So what is it? How can we give any advice on how to meet
> a spec when we don't even know
> what it is any longer?
> ****
>>> So why are you worrying about something that has a large
>>> negative exponent in
>>> its probability (10**-n for n something between 6 and
>>> 15)?
>>> There are higher-probability
>>> events you MIGHT want to worry about.
>>> ****
>>>>The only remaining essential element (not every little
>>>>detail just the essence) is providing a way to keep track
>>>>of web requests to make sure that they make it to completed
>>>>status in a reasonable amount of time. A timeout
>>>>and a generated exception report can provide feedback
>>> ****
>>> But if you have a client timeout, the client can
>>> resubmit
>>> the request, so there is no need
>>> to retain it on the server. So why are you desirous of
>>> expending effort to deal with an
>>> unlikely event? And implementing complex mechanisms to
>>> solve problems that do not require
>>Every request costs a dime. If the client re-submits the
>>same request it costs another dime. Once a request is
>>explicitly acknowledged as received, the acknowledgement
>>response will also inform them that resubmitting will
>>incur an additional charge.
> ****
> Oh, I get it, "we couldn't deliver, but we are going to
> charge you anyway". Not a good
> business model. You have to make sure that email was
> received before you charge. Not

As a third line of communication the result will be
available when they log back in, along with their entire
transaction history. Results can be kept for a month.

Isn't there a bounce process for email that is not
delivered? How reliable is this bounce process?

> easy. We got a lot of flack at the banking system when we
> truncated instead of rounding,
> which the old system did, and people would complain that
> they only got $137.07 in interest
> when they expected to get $137.08. And you would not
> BELIEVE the flack we got when we had
> to implement new Federal tax laws on paychecks, and there
> were additional "deductions"
> (the pay was increased by $0.50/payroll to cover the $0.50
> additional charge the
> government required, but again the roundoff meant we were
> getting complaints from people
> who got $602.37 under the new system when under the old
> hand-written checks they got
> $602.38. So you had better be prepared, under failure
> scenarios, to PROVE you delivered
> the result they paid for, even for $0.10, because SOMEBODY
> is going to be tracking it!
> It will be LOTS of fun!
> *****
>>> solution on the server side? And at no point did you
>>> talk
>>> about how you do the PayPal
>>> credit, and if you are concerned with ANY robustness,
>>> THAT's the place you have to worry
>>> about it!
>>> And how does PayPal and committed transactions sit with
>>> your magical 500ms limit and the
>>> "no paging, no disk access, ever" requirements?
>>> ****
>>>>Please make any responses to the above statement within
>>>>context of the newly defined much narrower scope of
>>> ****
>>> If by "fault tolerance" you mean "recovering from
>>> pulling
>>> the plug from the wall" my
>>No not anymore. Now that I have had some time to think
>>about fault tolerance (for the first time in my life) it
>>becomes obvious that this will not be the benchmark, except
>>for the initial request / request acknowledgement part of
>>the process.
> ***
> So what IS your requirements document? SHOW US!
> ****
> Joseph M. Newcomer [MVP]
> email: newcomer(a)
> Web:
> MVP Tips:
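The truncation-versus-rounding complaint Joe describes above is easy to reproduce. A sketch with Python's decimal module (the figures are invented for illustration, not the bank's actual numbers):

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

# An interest computation that lands between two whole cents:
interest = Decimal("137.0789")

# The new system truncated fractional cents...
truncated = interest.quantize(Decimal("0.01"), rounding=ROUND_DOWN)

# ...while the old system rounded them, one cent higher here.
rounded = interest.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

assert truncated == Decimal("137.07")
assert rounded == Decimal("137.08")
```

A one-cent difference per customer is exactly the kind of discrepancy people notice and complain about, which is Joe's point about being able to PROVE what was delivered, even for $0.10.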

From: Hector Santos on
Peter Olcott wrote to Joe:

>> ****
>> Oh, I get it, "we couldn't deliver, but we are going to
>> charge you anyway". Not a good
>> business model. You have to make sure that email was
>> received before you charge. Not
> As a third line of communication the result will be
> available when they log back in, along with their entire
> transaction history. Results can be kept for a month.

In other words Joe, no refunds! <g>

> Isn't there a bounce process for email that is not
> delivered? How reliable is this bounce process?

It depends. If the remote email server thinks you are behaving like a
bad guy and/or spammer, you might not get a bounce.

But there are two kinds of notifications.

1) Instant rejection with a response code.

2) Accept the message, then bounce after the SMTP session ends:

   250 Message Accepted

   server processes msg, sees bad msg, bounces the msg

So if you are sending them an email and, before you can TRANSFER the
payload (the text), the remote system rejects your ENVELOPE
information with a 55x or 45x, that satisfies a NOTIFICATION
requirement.

However, if the remote accepts the message and gives you a message
acceptance reply code 250, it could reject it after the session ends.
This traditionally requires a bounce, and the server SHOULD NOT
DISCARD IT!

But because of Accept/Bounce spoof attacks, the requirements have
been relaxed in the most recent IETF RFC 5321 specification so that
local operator policy dictates what it can do, and it may SILENTLY
discard the message.

Before RFC 5321, it was a "RULE" to send notifications if a server
can't deliver. By US ECPA, user expectations require a notification.
With RFC 5321, due to spam, it was officially made "OPTIONAL".

If they think you are not a spammer or bad guy, then MOST systems
will follow tradition and send you a bounce notification.

Some systems will even CHECK you to make sure you are for
real. So you better have a reliable SMTP server yourself
before you even think about sending mail to people, or they
won't be able to bounce mail back to you!
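Hector's two notification paths above hinge on the SMTP reply-code classes. A small sketch classifying them (the classes follow RFC 5321; the function name is invented):

```python
def classify_smtp_reply(code):
    """Classify an SMTP reply code into the classes discussed above:
    2xx accepted, 4xx transient rejection (try again later), 5xx
    permanent rejection. Note that a 250 acceptance only means
    'accepted for delivery' -- a bounce message can still arrive
    after the SMTP session ends."""
    if 200 <= code < 300:
        return "accepted"           # e.g. 250 Message Accepted
    if 400 <= code < 500:
        return "transient-reject"   # 45x: instant notification
    if 500 <= code < 600:
        return "permanent-reject"   # 55x: instant notification
    raise ValueError("not an SMTP reply code: %r" % code)

assert classify_smtp_reply(250) == "accepted"
assert classify_smtp_reply(450) == "transient-reject"
assert classify_smtp_reply(550) == "permanent-reject"
```

Only the 45x/55x paths give the sender an in-session notification; the 250 path leaves delivery confirmation at the mercy of the receiving operator's bounce policy, which is why "how reliable is the bounce process?" has no single answer.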