From: Joseph M. Newcomer on
Some years ago I was teaching a course about Windows programming, in Win16, and I was
talking about how the DS: register had to be set for DLLs. Some nitwit in the audience
raised his hand and said "This is clearly a consequence of using the brain-dead Intel chip
set instead of a flat model like the 68000". My response was "Clearly, you are a person
who not once ever wrote a program for the Macintosh". He admitted he hadn't. I then
explained that in a DLL on the Mac, it was essential that the A5 register be set to point
to the DLL's data, and I failed to see why it mattered whether the register was called
DS: or A5. You see, I HAD written code for the Mac, including DLLs.

Do you know why IBM chose the Intel chip instead of the Motorola chip? It turns out they
wanted the 68000 family, so IBM went to Motorola and said "We need K thousand chips per
month, can we get them?" Motorola, who was busy supplying Apple, said "No way" and Intel,
a little, barely viable chip company, said "Sure. Will that be enough or will you need
more?" So they chose the company that said it could deliver product. It wasn't based on
the architecture, it was based on the availability.

Note that in those days, there was a belief that you wouldn't need more than 64K data for
real programs, or 64K code. Sadly, real programs didn't conform to these toy models, and
Microsoft quickly had to add medium, large and huge models.
joe

On Fri, 26 Mar 2010 19:05:19 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:

>Peter Olcott wrote:
>
>>> If I had to guess, the reason why you don't understand any
>>> of this is because you are clueless of the history of the
>>> INTEL chip starting with its Memory Segmentation Model to
>>> the introduction of Real Mode vs Protected Mode hardware
>>> and operating systems, starting with DPMI.
>>
>> Introduced with the 80386, with the 80286 being the prequel.
>
>
>That is for the masses; Intel always had a preemptive mode CPU system
> starting with the 8086, but only their OS and software offered its
>full power. Windows didn't catch up until Windows 3.0, and before that
>the concentration was with the Microsoft/IBM OS/2 joint venture.
>
>Even so, the main point is DO YOU UNDERSTAND IT, more specifically the
>Memory Segmentation Model and Preemptive Thread - NOT Task - Switching?
>
>It is critically important because in the days where it was a
>consideration, the decision battle was between a Flat Memory Model
>that Motorola Chips offered and the Segmented Memory Model that Intel
>Chips offered - the two CPU vendors at the time for the two top
>microcomputer vendors:
>
> APPLE Macs - Motorola
> IBM/CLONE PC - Intel
>
>At the time, Apple and developers who wrote for Apple didn't have to
>worry about Memory Models to compile for. For Intel, you compiled and
>developed for SMALL, MEDIUM, LARGE and HUGE memory model compilations.
>
>When DPMI came, you were able to address even larger data models. But
>it came at a price with the heavy context switching between real and
>protected mode. One of the OS/2 engineers once said it was like
>driving a Jaguar at 120 mph coming to a complete halt and starting
>again at 120 mph. But the OS and CPU did it so fast that it all
>appeared transparent to the Peters of the world, like it's all in
>memory all the time - the same erroneous perception you have today.
>
>Finally, with 32 bit compilers coming along, the idea of compiling
>SMALL, MEDIUM, LARGE and HUGE memory models simply didn't apply
>anymore. It was all one model - huge.
>
>But today, HUGE is relative. Now you need more, and now you need
>additional helper technology to achieve this in an efficient manner.
>
>What has been told to you is that this helper technology needs to be
>PROGRAMMED. You just can't compile straightforward code with
>these huge memory needs and HOPE that by buying MORE MEMORY and MORE
>CORES it will address your requirements.
>
>So unless YOU PROGRAM IN THE HELPER TECHNOLOGY, you will not get the
>benefits of your machine.
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Hector Santos on
Joseph M. Newcomer wrote:

> ****
>> KIDDIE STUFF!
> ****
> Fault tolerance is not kiddie stuff, it is HARD. And it COSTS! It costs in complexity,
> and it costs in performance. And it impacts lots of pieces of the world. But he acts as
> if he can build something that adds a completely inconsistent requirement and just
> magically make it happen, with no effort and without any performance impact. I've talked
> to the people who build fault tolerance, including the people who have invented transacted
> file systems. I've sat through their lectures. This stuff is nontrivial, but Peter
> thinks it is going to happen if he waves his hands in some magical way. And has said
> explicitly that he wants to compromise reliability for performance, and is unwilling to
> compromise reliability no matter what the performance cost. Say what?
> joe

It's kiddie stuff in the sense that he doesn't know the basics of
engineering software to help build for fault tolerance and
redundancy. SQL for a FIFO Request Processing Queue does not lend
itself to redundancy and/or restarts unless heavy flushing is involved,
and as I said, in lieu of using Flash Memory, SQL or not, any
flushing, which is the only way to minimize data loss, negates his no
page fault theories.

As you and I both said, he can't have it both ways. The thing is, Joe, he
doesn't need high-level designs. His application is simple, far less
demanding than most applications.

He would be better off with an FTP server! :)

--
HLS
From: Joseph M. Newcomer on
My servers are all on UPS units and get notification if power is going to fail in the near
future; robust code handles WM_POWERBROADCAST messages. No real system ever has its power
drop abruptly, unless a real amateur has a computer that is plugged into the wall
directly. Given that a single-computer UPS costs < US$100, there is little excuse for not
having one. I have three 1500VA UPS units powering my entire site (including client
machines, servers, routers, hubs, etc.) and my wife has a 1500VA UPS unit protecting her
system. I also have RAID 5 on each of my two clustered domain controller servers. No ISP
fails to provide MVA of UPS support. My ISP can ride out eight hours of power failure,
just on their massive UPS system, and when they install the diesel generator backup unit
they will be able to run indefinitely (it may be installed by this point). My goal over
the next 12 months is a 10KW emergency generator for the office. (About US$3,000, fully
installed) The UPS units will provide power while it is coming up, which takes a few
seconds. So I will have a true 24/7 operation independent of the incoming electrical
service (in the last blizzard, tens of thousands of customers in our area were without
power for over a week; we never lost power, but I consider that just good fortune)

So the whole idea of robust-under-catastrophic-power-failure never happens in practice.
Simple WM_POWERBROADCAST handling, triggered by the UPS interface, allows for graceful
shutdown and recovery. The real catastrophic failures are memory parity errors, programming
errors that cause access faults, and killing the program from Task Manager.
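As a rough illustration only (the SaveStateAndShutDown helper here is
hypothetical, not code from any system of mine), the handling amounts to
catching WM_POWERBROADCAST in a top-level window procedure:

    #include <windows.h>

    /* Hypothetical helper: in a real server this would flush queues,
       close files, and exit cleanly. */
    static void SaveStateAndShutDown(void) { /* ... */ }

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_POWERBROADCAST:
            switch (wParam)
            {
            case PBT_APMBATTERYLOW:      /* the UPS battery is running down */
            case PBT_APMSUSPEND:         /* the system is about to suspend  */
                SaveStateAndShutDown();
                break;
            case PBT_APMRESUMEAUTOMATIC: /* power is back, system resumed   */
                /* reopen handles and resume background work here */
                break;
            }
            return TRUE;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }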
joe

On Sun, 28 Mar 2010 22:21:05 -0400, Hector Santos <sant9442(a)nospam.gmail.com> wrote:

>Joseph M. Newcomer wrote:
>
>>> When I speak of fault tolerance I am talking about yanking
>>> the power cord at any point during execution.
>> ****
>> This doesn't happen. Not in the real world. And power in what? And you have still
>> failed to define "fault" and "tolerance".
>> ****
>
>
>What he really meant was not paying his electric bill and having his power cut
>off. :)
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:u29vq5le7ktcneul7ff80jqiu53dc9vqqp(a)4ax.com...
> See below...
> On Sat, 27 Mar 2010 22:42:12 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>message news:31htq5tcpos7o7vah75d0tp02p6768q8ai(a)4ax.com...
>>> See below...
>>> On Fri, 26 Mar 2010 22:44:24 -0500, "Peter Olcott"
>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>>OK so I am convinced. It is a problem that I do have to
>>>>solve. I shouldn't have skipped that last half of the
>>>>book,
>>>>but, I was going for maximum grades, and instead focused
>>>>my
>>>>time elsewhere. I still have the original copyright 1985
>>>>book, will this be good enough or could I learn more
>>>>efficiently elsewhere?
>>>> Operating System Concepts by Peterson and Silberschatz
>>>>
>>> ****
>>> Almost every introduction-to-operating-systems book
>>> gives
>>> only general principles and
>>> simplified-for-students overviews. Real systems are
>>> never
>>> as simple as the textbooks
>>
>>But then (I guess I will ask explicitly) How much has
>>this
>>stuff changed in 25 years?
>>My book is 25 years old (it was new when I took the
>>course).
> ***
> As "general principles", nothing much has changed. In the
> infinite details of real
> operating systems, they have changed a LOT. For example,
> did you know that Vista and Win7
> have new kernel storage allocators that are lock-free and
> multiprocessor-safe? The
> technique, which I read about on some Microsoft blog, is
> called "speculative allocation".
> You won't find that in your textbook.
>
> HUNDREDS of details like this are the difference between
> textbook descriptions and real
> systems. And each system evolves its own techniques.
> Look at the earliest file systems
> (NFS) vs. the later, secured file systems (AFS, done at
> CMU in the late 1980s). Not at
> all the same; yet unless you read the AFS papers, you will
> not learn how they did it, what
> problems they had to solve, etc. (for example, AFS used
> Kerberos security to protect the
> data flowing on the network, and that had a cost)
> joe
>
>>
>>The only thing that I really need to know about VM right
>>now, is how to minimize its impact on my performance.
> ****
> Something we have been trying to explain to you for close
> to two weeks.
> ****
>>
>>#include <sys/mman.h>
>>int mlock(const void *addr, size_t len);
>>int munlock(const void *addr, size_t len);
>>int mlockall(int flags);
>>int munlockall(void);
>>
>>Is there anything like this in Windows?
> ****
> How would I know? I have no idea what that is supposed to
> do. Seeing an interface spec
> by itself conveys no useful information. This looks
> suspiciously like a locked memory
> manager, and yes, Windows has it at application level, and
> I just told you the kernel has
> a whole new method called "speculative allocation" that
> eliminates the cost of
> multiprocessor and multithread memory locks. So why
> should I care about that header
> file?
>
> Note that header files in the sys subdirectory are usually
> platform-specific
> application-level files. And solve platform-specific
> issues. It doesn't mean they are
> for the operating system code itself. So the name of the
> header file conveys nothing. So
> unless you tell me what those functions are supposed to
> do, I can't tell you if there is
> an equivalent in Windows!
>
> Perhaps you are referring to VirtualLock, which is a
> pretty obvious API if you simply read
> the MSDN, but has its own problems in terms of usability.
> Read the fine print, and the
> fine print on SetProcessWorkingSetSize. It might improve
> your program's performance, at
> the expense of everything else (including the Web server
> you are using, thus degrading your
> response time). Maybe it will work, maybe not. This is
> why you have to run actual
> experiments on the configuration you are going to use, not
> just try to guess at what might
> happen! Or draw conclusions from experiments unrelated to
> what you might actually do (for
> example, coming to a conclusion about multithreading by
> running multiple processes, which
> is clearly an invalid experiment to predict multithreaded
> behavior, which we have been
> trying to tell you!)

OK great we have reached a crucial point of mutual
agreement. I will continue to assume that my assumptions are
correct for now because limited testing shows this to be
true. I will derive various real-world stress test scenarios
and see what happens.
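
For reference, here is a minimal sketch of the Windows analogue of the
mlock() calls quoted above, using the VirtualLock and
SetProcessWorkingSetSize APIs mentioned in this thread. The PinBuffer
name and the sizes are purely illustrative assumptions, not production
code:

    #include <windows.h>
    #include <stdbool.h>

    /* VirtualLock is bounded by the process minimum working set size, so
       the two calls are normally used together (see the MSDN fine print). */
    static bool PinBuffer(void *buf, SIZE_T bytes)
    {
        SIZE_T slack = 16 * 1024 * 1024;   /* headroom for stacks, DLLs, etc. */
        if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                      bytes + slack,        /* new minimum */
                                      bytes + 4 * slack))   /* new maximum */
            return false;
        return VirtualLock(buf, bytes) != 0;
    }

Whether pinning like this actually helps, or just starves everything
else on the machine, is exactly what those stress tests would have to
show.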

>
> As I pointed out, in your fantasy world, the ONLY process
> you consider is your app
> running, and you ignore forms of reality such as (a) the
> file system is written in terms
> of paging (b) the executable image is handled by
> memory-mapped files paging in pieces of
> the executable image (c) there are other processes, owned
> by the OS, that are running (d)
> parts of the OS itself are paged. But hey, why should you
> let reality intrude into your
> fantasy? it is, after all, YOUR fantasy, and it can be
> anything you want it to be (the
> fact that it becomes more and more distant from reality is
> not a major concern here,
> obviously)
> joe

It continues to contradict any sort of reasonable hypothesis
that plenty of extra RAM will not always work (to prevent
page faults), but I will stress test this assumption to its
limits.
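
One way to turn that assumption into a measurement is to read the
process fault counters before and after the stress run. A minimal
sketch with the standard getrusage() call (the labels are illustrative):

    #include <stdio.h>
    #include <sys/resource.h>

    /* Print the process's cumulative minor and major page fault counts. */
    static void report_faults(const char *label)
    {
        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) == 0)
            printf("%s: minor=%ld major=%ld\n",
                   label, ru.ru_minflt, ru.ru_majflt);
    }

    /* Call report_faults("after load") once everything is resident, then
       report_faults("after stress run"); any growth in the major count
       is a page that was actually read back from disk. */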

>
> ****
>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:afbvq51msk3ch3qottars4ebnpafktmu5h(a)4ax.com...
> See below...
> On Sun, 28 Mar 2010 08:07:58 -0500, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Pete Delgado" <Peter.Delgado(a)NoSpam.com> wrote in message
>>news:eeYnLekzKHA.2644(a)TK2MSFTNGP04.phx.gbl...
>>>
>>>
>>> "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote in message
>>> news:GpGdnebGT-QCRDPWnZ2dnUVZ_h2dnZ2d(a)giganews.com...
>>>> Here is how you tell Windows not to ever swap this
>>>> memory
>>>> out.
>>>>
>>>> http://msdn.microsoft.com/en-us/library/aa366895(VS.85).aspx
>>>
>>> Weren't you the one who insisted that your application
>>> wasn't using *virtual memory*? So why would you use this
>>> API for your program that doesn't use virtual memory?
>>> <grin>
>>>
>>
>>I am the one that continues to insist that my application
>>experiences no page faults after all of its data is
>>loaded.
>>Joe continues to insist that I can not count on this
>>behavior. I think that Joe may be wrong, but, this is a
>>backup plan.
> ***
> I'm right. Run ten instances of your app. Or twenty.
> Essentially, you are predicating
> your success on a temporary piece of good luck. Not the
> way to build robust systems.
> joe

Glance at a couple of words and refute, refute, refute!
I am not going to repeat myself anymore on this.
You are wrong, you know that you are wrong, and you are just
playing head games.

> ****
>>
>>> PS: I don't suppose you read the "remarks" section that
>>> explained about the maximum number of pages a process
>>> can
>>> lock?
>>
>>SetProcessWorkingSetSize
> ***
> And did you read the fine print in it? the part that says

It's all far, far less convoluted in the OS that I will be
using.
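
For what it's worth, a minimal sketch of that presumably simpler path,
assuming the target OS is Linux as the mlock() prototypes earlier
suggest: a single mlockall() call pins everything, subject to
RLIMIT_MEMLOCK or CAP_IPC_LOCK.

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Pin all current pages and anything allocated from now on. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");          /* typically EPERM or ENOMEM */
            return 1;
        }
        /* ... allocate and load the application data here; it stays resident ... */
        munlockall();
        return 0;
    }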

>
> ============================
> Remarks
> ...
> If the values of either dwMinimumWorkingSetSize or
> dwMaximumWorkingSetSize are greater
> than the process' current working set sizes, the specified
> process must have the
> SE_INC_WORKING_SET_NAME privilege. All users generally
> have this privilege. For more
> information about security privileges, see Privileges.
>
> Windows Server 2003 and Windows XP/2000: The specified
> process must have the
> SE_INC_BASE_PRIORITY_NAME privilege. Users in the
> Administrators and Power Users groups
> generally have this privilege.
> The operating system allocates working set sizes on a
> first-come, first-served basis. For
> example, if an application successfully sets 40 megabytes
> as its minimum working set size
> on a 64-megabyte system, and a second application requests
> a 40-megabyte working set size,
> the operating system denies the second application's
> request.
>
> ****Using the SetProcessWorkingSetSize function to set an
> application's minimum and
> maximum working set sizes does not guarantee that the
> requested memory will be reserved,
> or that it will remain resident at all times. When the
> application is idle, or a
> low-memory situation causes a demand for memory, the
> operating system can reduce the
> application's working set.***** An application can use the
> VirtualLock function to lock
> ranges of the application's virtual address space in
> memory; however, that can potentially
> degrade the performance of the system.
>
> [**** emphasis added]
>
> When you increase the working set size of an application,
> you are taking away physical
> memory from the rest of the system. This can degrade the
> performance of other applications
> and the system as a whole. It can also lead to failures of
> operations that require
> physical memory to be present (for example, creating
> processes, threads, and kernel pool).
> Thus, you must use the SetProcessWorkingSetSize function
> carefully. You must always
> consider the performance of the whole system when you are
> designing an application.
> =============================================
>>
>>>
>>>
>>> -Pete
>>>
>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm