From: Joseph M. Newcomer on 13 Apr 2010 21:16
Yes, I think it is my OCD kicking in. Maybe if I take these nice little pills the doctor
On Tue, 13 Apr 2010 01:17:52 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:
>Joe, let's not give this troll any further reason for living here.
>Ignore him. Obviously he is here because the Linux people already
>blew him off, so let's do the same. I blocked his mail with Thunderbird
>and so far it's working! I don't see him any more! The urge to jump in
>and comment on another moronic statement is gone! :)
>Joseph M. Newcomer wrote:
>> The nice thing is that it ups my posting count (and Hector's, and you've got quite a few
>> also). But if Microsoft just counts posting frequency, without looking at content, this
>> might be the first AI program to get an MVP award!
>> On Mon, 12 Apr 2010 14:29:31 -0400, "Pete Delgado" <Peter.Delgado(a)NoSpam.com> wrote:
>>> "Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
>>>> Live and learn. Which leads to the questions: if you are going to design
>>>> for Linux, then
>>>> Why are you trolling in a WINDOWS development forum?
>>>> Why are you here asking/stating design methods that defy logic
>>>> under Windows when YOU think this logic is sound under UNIX?
>>>> If you are going to design for Windows, then you better learn how to
>>>> follow WINDOWS technology and deal with its OS and CPU design guidelines.
>>> You haven't yet figured out the riddle of Peter Olcott despite the repeated
>>> clues? When you look back at the posts after I tell you his little secret,
>>> it should become obvious and you should have one of those "Ah-ha!" moments.
>>> The truth of the matter is that there is no OCR technology at play here at
>>> all but rather AI technology. The secret is that Peter Olcott is really an
>>> AI program that is being entered to win the Loebner Prize.
>>> Let's look at the evidence again, shall we? Peter originally posed a
>>> question to the group. From each of the answers he received, his follow-up
>>> questions contained an amalgam of the original question and the resulting
>>> answer. In each case, the mixture could be made and perceived by humans to
>>> be reasonably logical because the original respondent had already considered
>>> the answer in the context of the original question. This is pretty common
>>> with many Turing Test style programs (mixture of question and response). I
>>> recall some of the games that I had back in the 80's that used this
>>> technique to appear intelligent.
>>> This also explains the magical morphing requirements and the circular
>>> reasoning being used quite nicely. Each time a post was made by the Peter
>>> Olcott program, it would incorporate the suggestions from previous posts by
>>> members of this group. The interesting thing about this particular Turing
>>> Test program is that if the group reached a consensus on a particular
>>> approach, the program would respond *against* the suggestion even after many
>>> attempts were made to justify the suggestion thus generating even more posts
>>> to the affirmative that the program could respond to. The architecture of
>>> this part of the "personality" was sheer genius because it simulates the
>>> average clueless programmer who has no motivation and below average
>>> skills.
>>> Another clue must be the way the Turing Test program (Peter Olcott) fishes
>>> for additional posts by always responding to *every* post on *every* branch
>>> of a thread. The Turing Test program must make sure that its posts are the
>>> leaf on every branch in order to ensure that *someone*, *somewhere* will
>>> respond to it. Without responses, the machine is simply in wait state which,
>>> of course, means that the program has failed to convince humans that there
>>> is a human intelligence behind the posts.
>>> I had originally thought that the real "programmer" would come forth on
>>> April 1st and identify him/herself, but apparently the deception has gone so
>>> swimmingly well that testing will continue so long as you and Joe post to
>>> the threads. ;-)
>>> In an effort to find out who the real programmer was behind the Peter Olcott
>>> Turing Test machine, I consulted with the internet anagram server at
>>> http://wordsmith.org/anagram/ and typed in the name Peter Olcott in the
>>> hopes that the real culprit had simply tried to mask his identity. The
>>> responses included:
>>> Elect Pro Tot
>>> Creep Lot Tot
>>> Crop Let Tote
>>> Cop Letter To
>>> The bottom line is that while we may not know the true identity of the
>>> programmers behind the Peter Olcott hoax, it is possible that the internet
>>> anagram program may have come up with an appropriate response to his
>>> spamming of a Windows newsgroup with questions that are to be implemented on
>>> Linux... send a letter to a cop!
>>> PS: Can we all get back to real MFC programs and real MFC programmers now???
>> Joseph M. Newcomer [MVP]
>> email: newcomer(a)flounder.com
>> Web: http://www.flounder.com
>> MVP Tips: http://www.flounder.com/mvp_tips.htm
Joseph M. Newcomer [MVP]
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on 13 Apr 2010 21:25
On Tue, 13 Apr 2010 09:34:24 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:
>"DanB" <abc(a)some.net> wrote in message
>> Pete Delgado wrote:
>>> You haven't yet figured out the riddle of Peter Olcott
>>> despite the repeated
>>> clues? When you look back at the posts after I tell you
>>> his little secret,
>>> it should become obvious and you should have one of those
>>> "Ah-ha!" moments.
>> And maybe Peter is just running surrogate? Data mining...
>> I have dumped much of this thread, just a lot of 'mine is
>> You may be right, but for the wrong target, yet right...
>> A human helper for the 'target' may be what this is all
>> about. As I recall, Peter is an old participant here. (My
>> bad if that ain't so.)
>> But this is not the Peter I remember....
>> So curious is this thread as I read what I can, as 'real
>> world' solutions have been made long ago.
>> Best, Dan.
>I have been here for many years. I am more ignorant of the
>topics being discussed now than of prior topics. Most of
>those prior topics have been of the form: what is the MFC
>syntax to do X?
Perhaps the fact that this newsgroup is called
is the reason for this...
>A question where a simple
>verifiably correct factual answer can be provided, thus no
>room for debate or judgment calls.
>In the case where there is huge room for debate and judgment
>calls I must have sound reasoning to verify the truth of the
>answer and the precise degree that the answer is
>appropriate. I had thought that people here continued to
>infinitely dance around providing this reasoning only to
No, we do it because we are lazy; I don't want to teach an entire course in queueing
theory to convey basic principles I largely know from years of experience. Nor do I want
to retype major parts of the MSDN when they are easily readable.
And we are just ROTFL over the unsupported explanations we see you give. So we keep you
around for entertainment value. We keep wondering just how far you can go in making
insane assertions that have no supporting evidence.
>Now it seems that the real reason is that they simply forgot
>what the original reasoning was, and instead use heuristics
>and design principles as their measure of validity. They
>danced around providing the reasoning to avoid the
>embarrassment of divulging that they simply forgot this
>reasoning.
We apply hard-won lessons, and I don't feel like explaining 46 years of experience, one
line at a time, each time I'm asked a question. I don't need the math to re-derive the
queueing theorem about queues growing to infinite size when I already know that I've done
that (40 years ago, to pass a PhD qualifier) and all I need is the result. If you want to
understand it, go get a book on queueing theory. Learn what I learned.
Joseph M. Newcomer [MVP]
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Joseph M. Newcomer on 13 Apr 2010 21:35
On Tue, 13 Apr 2010 00:08:10 -0500, "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote:
>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>> See below...
>> On Mon, 12 Apr 2010 18:43:38 -0500, "Peter Olcott"
>> <NoSpam(a)OCR4Screen.com> wrote:
>>>The scheduler pulls off the priority sorted jobs in
>>>order. Since there are no high priority Jobs the scheduler
>>>begins a 3.5 minute low priority job. Immediately after
>>>this a high priority job arrives. After the 3.5 minute job
>>>completes, the scheduler begins the high priority job that
>>>has now exceeded its real-time threshold by a factor of
>> That's why I gave you the priority-inversion-prevention
>> algorithm, which is if you have N
>> handlers you never dispatch more than K < N large jobs,
>> which means that you have (N-K)
>> threads to handle the fast-turnaround jobs. But I guess
>> you missed that part.
>> And no thread priority fiddling is required to make this
>> work correctly!
>Then how is it that the high priority jobs would always get
>at least most of the CPU?
They will. And this can shut down your Web server and other critical services of the OS --
and you think this doesn't matter?
>> Pretty straightforward, easy to implement, easy to tune;
>> you decide what value K needs to
>> be to meet normal requirements. Simple, straightforward,
>> easy to implement. What's wrong
>> with it?
>The fact that I still can't see any improvement than my less
>complex strategy, and it still seems that my less complex
>strategy would have superior performance.
Actually, if you think having complex interprocess signaling mechanisms is "simpler", you
have defined "simpler" in a way of which we have been previously unaware. And you have no
evidence this actually has "superior performance"; in fact, you have no evidence about
performance at all.
>>>I can neither prove nor disprove dogma, (there just isn't
>>>enough to work with) I can only work with reasoning.
>> Sadly, if I went back to my books on queueing theory, and
>> found the appropriate formulae,
>> I seriously doubt you could comprehend them. I suggest
>> that if you think your approach is
>> superior, you are guilty of presenting dogma, and you have
>> not proven it, nor have you
>> presented any sound reasoning to prove that it works
>> better than SQMS.
>Yes and if you don't still remember the details of this
>there is no way to explain these details. Could you maybe
>try to find me a good link, I don't know enough about this
>stuff to know a good link from a bad one.
I don't need to remember the details; I only have to remember the results. Sometimes you
need to do massive amounts of computation to derive a single bit, but once that bit is
derived, you can immediately map a set of input conditions to that bit without having to
work through the entire derivation. It is in this way, we derive patterns we know work,
and reject patterns we know don't work. Go get a book on queueing theory and study it. I
did, many years ago, and I have a set of known-good patterns and known-bad patterns, and
all I need to know are the patterns.
>> Something about pots and kettles comes to mind here,
>> something about color....
>> You are guilty of the same issues you are accusing me of.
>> You are using false assumptions
>> and perhaps the I Ching to "prove" MQMS is necessarily
>> better than SQMS, and your only
>> evidence seems to be "I'm a superb designer". I, at
>> least, have experience in queueing
>> theory and realtime embedded systems.
>I have seen no evidence showing that SQMS is better than how
>I implemented MQMS. Perhaps there are certain design
>principles and heuristics that tend to show this. I am
>totally unaware of these. Whenever I use any design
>heuristics or principles I endeavor to always find the
>reasoning behind them, that way I never apply them in the
>cases where they do not apply. When I do find this
>reasoning, I tend to discard the heuristic and principle in
>favor of this reasoning.
But you have been unable to show what MQMS performance is, given a particular mix of jobs.
I actually showed how to do the arithmetic, step-by-step, and showed that MQMS is going to
be slower than SQMS in a multijob scenario, and just plugged a few numbers into a
closed-form equation to get the results. You can do this, too. And you need to prove
your solution is better, since I already proved SQMS is better.
>>>If they work better, then there is a reason why they work
>>>better; if there is not a reason why they work better, then
>>>they don't work better. Please provide the reasoning. As
>>>soon as I see sound reasoning that refutes my view, I
>> You have already pointed out that you have not read my
>> careful analysis, so why should I
>> reproduce it here? I wrote it once already.
>I never saw it and there was a whole day that I ignored half
>of your messages because I was so annoyed with you. Now that
>I think that I may have inferred the root cause of this
>annoyance, there is no reason for this annoyance to continue.
That is not my problem. That is your problem. Go back and read them. I am not
responsible for your failure to read replies.
>Could you find this message and tell me the time and date? I
>will read it.
No, I leave this as an Exercise For The Reader.
Joseph M. Newcomer [MVP]
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Peter Olcott on 13 Apr 2010 22:14
"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
> See below....
> On Tue, 13 Apr 2010 00:08:10 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>>Yes and if you don't still remember the details of this
>>there is no way to explain these details. Could you maybe
>>try to find me a good link, I don't know enough about this
>>stuff to know a good link from a bad one.
> I don't need to remember the details; I only have to
> remember the results. Sometimes you
> need to do massive amounts of computation to derive a
> single bit, but once that bit is
> derived, you can immediately map a set of input conditions
> to that bit without having to
> work through the entire derivation. It is in this way, we
> derive patterns we know work,
> and reject patterns we know don't work. Go get a book on
> queueing theory and study it. I
> did, many years ago, and I have a set of known-good
> patterns and known-bad patterns, and all I need to know
> are the patterns.
Not whether or not these patterns still apply after decades
of technological advances, or whether or not they apply in a
particular situation? While that might even work OK most of
the time, it is highly doubtful that it would always work
well.
From: Jerry Coffin on 14 Apr 2010 03:49
In article <XIqdne3OYaoRX1nWnZ2dnUVZ_qOdnZ2d(a)giganews.com>,
> "Jerry Coffin" <jerryvcoffin(a)yahoo.com> wrote in message
[ ... The buffer built into the hard disc: ]
> Because of required fault tolerance they must be immediately
> flushed to the actual platters.
> > Though it depends somewhat on the disk, most drives store
> > enough power on board (in a capacitor) that if the power dies,
> > they can still write the data in the buffer out to the platter.
> > As such, you generally don't have to worry about bypassing it to
> > assure your data gets written.
> When you are dealing with someone else's money (transactions
> are dollars) this is not recommended.
[ ... ]
> Buffer must be shut off, that is exactly and precisely what
> I meant by [all writes are forced to disk immediately].
Quite the contrary. Disabling the buffer will *hurt* the system's
dependability -- a lot. The buffer allows the disc to use an elevator
seeking algorithm, which minimizes head movement. The voice coil that
drives the head is relatively fragile, so minimizing movement
translates directly to reduced wear and better dependability.
Disabling the buffer will lead almost directly to data corruption.
None of the banks, insurance companies, etc., that I've worked with
would even *consider* doing what you think is necessary for financial
transactions.
[... hard disc seek times rated on 1/3 disc movement instead of 1/2]
> Then they are liars and should be sued.
They're not liars at all -- though the fact that they're rating based
on 1/3 stroke instead of full stroke is usually well hidden in really
fine print. If you want to sue them, go right ahead though.
> > Second, you aren't taking the rotational latency of the
> > disk into account. The fastest drives I know of today spin at
> > 15,000 RPM. That equates to 4 ms per rotation. Under normal
> > circumstances, you have to wait for approximately half a rotation
> > (on average) to get to the
> I think that the figure that I quoted may have already
> included that; it might really be access time rather than
> seek time. I am so unused to the c/c++ library lseek and
> fseek meaning that, that I may have related the incorrect
> term.
I doubt it -- for the sake of "nice" numbers, most drive
manufacturers like to quote the fastest sounding things they can. In
fact, they'll often quote the time only for the actual head movement,
even leaving out the time for the controller hardware to translate
the incoming command into signals sent to the servo (which makes no
real sense at all, since there's no way for you to actually bypass
it).
[ ... ]
> In any case access time still looks like it is the binding
> constraint on my TPS.
Perhaps -- and perhaps not. Right now, the "binding constraint" isn't
access time; it's complete lack of basis for even making informed
guesses. I'll say it again: for your guesses to mean anything at all,
you need to put together at least a really minimal system and do some
measurements on it. Without that, it's just hot air.