From: Hector Santos on
First, a "silent WOW!"

Do you think this is related to the faults of the human brain?

Do you think this is related to the fact we have yet to evolve
carbon-based chips?

Yes, I do agree things are reinvented with new generations. You see
it all the time, to your bewilderment, especially in my market. My
daughter last year said, "Dad, I have a great idea. What if we
consolidate all our email accounts with a single tool!" I said, "Like
Outlook?" She said, "Huh?" So I showed her, and a little dumbfounded
she said, "But it's not the WEB!" Not to discourage her
entrepreneurial spirit and the fire in her belly, I helped her outline
a business proposal! Go West!

Yes, I do agree the SEI fell short of its early promise. I was there
through the proposal, the acceptance, the building going up, the
ceremonies and the people recruitment. Westinghouse was a big part of
it, and one of our think-tank people went there, Dr. ALAN Something.
An AI guru, he might be among those you were complaining about who
shifted it to a management enterprise. I entered the think tank
shortly before he left, so I didn't know him that well, but I did take
over some of his responsibilities, which included carte blanche to
explore all the new computer technology, machines and languages of the
time. Gee, I remember reading an article in some magazine about
something called "HyperText." So I implemented an example of a
criminal-database lookup, using a demo with a picture caricature of my
boss, and the meeting room applauded with amazement and laughter! I
was KING! Simple-minded people, I recall. But we certainly
underestimated the potential of hypertext. We brought in that
Pittsburgh startup "Knowledge Base Something" (you probably remember
them); they developed a hypertext database system. I recall the
complexity, the slowness, and saying "But it doesn't work on a PC!!!"

Yes, there were all kinds of faults, and things could have been
better, but I guess, like most of us, we complain more than we take
action.

Anyway, even with all our faults, you and most of us in the industry
do seem to have gotten pretty far. :)

--

Joseph M. Newcomer wrote:

> On Sat, 13 Feb 2010 23:23:52 -0500, Hector Santos <sant9442(a)nospam.gmail.com> wrote:
>
>> Man, reading you, one has to wonder why the world has blown up yet or
>> gotten this far. Everything was wrong, badly designed, hacked and no
>> one ever used it right or differently. Its all one way with you. Just
>> consider your inconsequential historical callback note had nothing to
>> do with the OP issue or question, nor contributed to the problem. I'm
>> sure until the code is posted, you would not exclude it as a
>> possibility. I say its mostly likely unrelated.
> ****
> What continues to amaze me is that ideas we knew were bad in 1970 keep getting re-invented
> by another generation who doesn't understand why they were abandoned. We worked for
> decades to improve the quality of programming, and here we are, in 2010, where the
> state-of-the-art is stuck essentially at C, and the C fanatics wonder why we say there are
> problems. There are problems because nobody actually pays attention to the past, looks at
> past successes or failures, but just start, from scratch, reinventing the same bad ideas
> over and over and over again. We just re-invented timesharing, which we realized by the
> early 70s was a Bad Idea. Now we call it "cloud computing". Duh. For all the same
> reasons that timesharing was bad in the 1970s, cloud computing is bad. So if I seem
> overly cynical, remember that this is not the FIRST time I've seen bad ideas re-invented;
> I've been around long enough to see most of them re-invented two or three times.
> Approximately a generation apart.
>
> Unfortunately, I'm not the kind of old codger who longs for the "good old days". The best
> part of the good old days is that they are in the past, and we have grown beyond them. And
> then someone comes along and tells me that the good old days were the best time, and the
> ideas we tried and abandoned are essential to good software. I am skeptical of this.
>
> Why aren't we programming in functional languages? Why do we even still have compilers
> that run as separate preprocessors to execution? (Seriously: trace-based compilation
> systems exist, and run, and are used every day, and here we sit with C and C++ and VB and
> C# compilers, stuck in the punched-card model that I had abandoned by 1969, forty years
> ago. C/C++ used to have a working edit-and-continue system until it was broken, and while
> C# and VB make such a system trivial, they never seem to have had it. Duh. We've gone
> backward since VS6, instead of forward.
>
> Callbacks were a bad idea; we knew better how to handle this in the early 1970s. Look at
> LISP closures, for example, and the large number of languages that managed to implement
> closures by the 1980s. Mutex-style synchronization was dead by the end of the 1980s, but
> not one of the good implementations of interthread synchronization made it to
> commonly-used languages. So we see deadlock problems. Callbacks, by the way, introduce
> problems in reasoning about program logic which makes reasoning about deadlock causes much
> harder, so my observations are *not* irrelevant to the OP. I've been doing multithreading
> since 1975 as a way of life, and was even doing multithreading back in 1968. And I
> learned that explicit locking is usually a mistake. It is a hack to solve a problem that
> should be solved by architecture, and the low-level lock we are familiar with, although it
> needs to exist, should be completely invisible to the programmer (example: putting
> elements in queues requires a lock at the lowest level, but I should never see that or
> have to reason about it). People who worried about these issues have won Turing awards
> (the computer profession's equivalent of the Nobel Prize) yet not a single one of their
> ideas exists in our programming languages. The "synchronize" capabilities of Java and C#
> are deeply flawed (a friend of mine just got his PhD for building a program that finds
> synchronization errors in Java, and his comment is, "Everyone gets threading wrong all the
> time. Start with that as your premise" and has the experience of examining, with his
> program, over half a million lines of Java written by some of the best Java multithreading
> experts to demonstrate that even they made serious errors).
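A minimal sketch of that "invisible lock" idea, with invented names
(modern C++ for brevity): the queue owns its mutex, so callers just
Push and Pop and never reason about locking:

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    template <typename T>
    class BlockingQueue {            // the lock lives here, unseen by callers
    public:
        void Push(T value) {
            {
                std::lock_guard<std::mutex> hold(m_lock);
                m_items.push(std::move(value));
            }
            m_ready.notify_one();
        }
        T Pop() {                    // blocks until an item is available
            std::unique_lock<std::mutex> hold(m_lock);
            m_ready.wait(hold, [this] { return !m_items.empty(); });
            T value = std::move(m_items.front());
            m_items.pop();
            return value;
        }
    private:
        std::mutex m_lock;
        std::condition_variable m_ready;
        std::queue<T> m_items;
    };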
>
> So yes, we are, for all practical purposes, programming largely using 1970 technology,
> except when we are using 1960 technology. In the 1980s, I was one of the founding
> scientists of the Software Engineering Institute, and was examining the best-of-the-best
> technology so we could figure out how to get it into the mainstream. The mainstream
> didn't want it; they were content with 1960s technology because they were comfortable with
> it. Learning new stuff, and better ways to do things, was not an acceptable agenda. I
> left the SEI when I realized that (a) it had nothing to do with the actual *Engineering*
> of software, but was concerned with the *management* of the process and (b) industry
> didn't want to change what it was doing for any reason, no matter how cost-effective it
> might be in the long run.
>
> I really don't want to live in the past, and when I complain that yet again we are
> replicating the technology of the 1950s and 1960s, somebody comes along to explain that it
> is necessary we do so because there aren't better ways. There have *always* been better
> ways.
>
> Consider: JavaScript, the darling of AJAX, is just Simula-67 done badly. The heart of
> AJAX, XML, was done better in both 1977 (in a project I was responsible for) and 1981 (in
> a PhD dissertation done by a friend, an outgrowth of refining the problems we discovered
> in the existing 1977 implementation). DTDs were designed by people who had no experience
> designing languages or grammars (just ask any professional language designer. We still
> have quite a few around, including a friend of mine who designed the Ada competitor).
> Those of us in our 60s, who were there at the leading edges in the 1970s through 1990s,
> look around and see that there has been very little progress made in forty years. What we
> lament is not that the good old days are gone, but they are, alas, still with us.
> joe
> ****
>> Anyway, it would interesting to hear your critic on the design faults
>> of the human brain! :)
>>
>> --
>>
>> Joseph M. Newcomer wrote:
>>
>>> See below...
>>> On Sat, 13 Feb 2010 18:44:35 -0500, Hector Santos <sant9442(a)nospam.gmail.com> wrote:
>>>
>>>> Joseph M. Newcomer wrote:
>>>>
>>>>> Again, you are dragging IPC into a discussion where IPC was not involved. You can do
>>>>> callbacks such as event sinks without exposing the ugliness of the raw callback mechanism.
>>>> Ok, lets try to get this debate civilly. I don't care for history so
>>>> please try to refrain from personal opinions there, i.e. callback is a
>>>> C hack.
>>> ****
>>> Actually, a callback is an assembly code hack, translated into C. This is not an opinion,
>>> it is a statement of fact. Callbacks at this level exist because the languages were
>>> unable to provide a suitable abstraction, so instead of some clean mechanism, the old
>>> let's-use-a-pointer technique was recast from assembler to C. It is a hack in that it is
>>> simply a translation of a machine-level concept directly into a high-level language.
>>>
>>> OTOH, the notion of virtual methods as a means of invoking operations is consistent with
>>> the linguistic design of C++, and although it is done *exactly* by using a pointer to
>>> redirect the call, it is done in a framework that is semantically consistent with a
>>> high-level abstraction.
>>> ****
>>>> What do you consider is a raw callback mechanism?
>>> *****
>>> call [eax]
>>>
>>> substitute other register names, it is isomorphic to renaming. Expressed in C, it is
>>> implemented by passing in a function pointer as a parameter to a call, with the purpose
>>> that the specified function is called when there is a desire to invoke some operation. It
>>> is limited to a single function in general.
>>> *****
>>>> Now, in the previous message, you stated that if it didn't provide a
>>>> user-define value, then YES, I will agree that a callback mechanism
>>>> that doesn't take into account:
>>>>
>>>> 1) Rentrancy
>>>> 2) Provide for user-defined object/data access,
>>>>
>>>> then yes, it is a poor implementation. I agree, and when dealing with
>>>> 3rd party software with callback logic lacking the above, especially
>>>> #2, then absolutely, its problematic.
>>> ****
>>> Sadly, most 3rd party software fails in both the above. The Windows API has far too many
>>> callback-style APIs that fail in the same way, including, inexcusably, some that were
>>> added in either 2000 or XP, when the issues were well-known, but totally ignored.
>>> ****
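For concreteness, a minimal sketch (made-up names) of a C-style
callback interface that gets both points right: the caller supplies an
opaque context pointer that is handed back untouched, and the library
keeps no hidden state, so the callback is reentrant:

    #include <stdio.h>

    /* the user's context travels with the callback */
    typedef void (*progress_fn)(int percent, void* user);

    /* library side: stores the pair, never interprets 'user',
       keeps everything on the stack (reentrant) */
    static void do_long_work(progress_fn cb, void* user)
    {
        for (int pct = 0; pct <= 100; pct += 25)
            cb(pct, user);
    }

    /* client side: recovers its own object from the context pointer */
    struct Job { const char* name; };

    static void on_progress(int percent, void* user)
    {
        struct Job* job = (struct Job*)user;
        printf("%s: %d%%\n", job->name, percent);
    }

    int main(void)
    {
        struct Job job = { "index rebuild" };
        do_long_work(on_progress, &job);
        return 0;
    }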
>>>> But that is an implementation issue, and not a language or "C Hack"
>>>> issue. You can say that the C++ layer provide a cleaner interface,
>>>> and I agree, but I will also note that these are generally based on a
>>>> lower level callback abstraction. In fact, I seem to recall dealing
>>>> with 3rd party software where it lacked a user-defined value and using
>>>> a C++ wrapper help resolved that by making the callback static in the
>>>> class, and combining it with TLS. I forget that project, but I do
>>>> recall going through those motions.
>>> *****
>>> The issue is in confusing an interface that works directly and "in-your-face" with raw
>>> function pointers, and one which uses the syntactic features of the language (like virtual
>>> methods, a first-class concept) to disguise the implementation details and provide for a
>>> degree of cleanliness and elegance. For example, compare try/catch/throw or _try/_except
>>> to setjmp/longjmp as mechanisms. All are stack unwinders. But setjmp/longjmp is an
>>> inelegant kludge compared to _try/_except, and neither will work in C++ because of the
>>> need to invoke destructors as the stack unwinds.
>>>
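For anyone who has not had the pleasure, a minimal sketch of
setjmp/longjmp used as a crude throw/catch (illustrative only): the
longjmp cuts the stack straight back to the setjmp point, and no
destructors run for anything unwound along the way, which is exactly
why it cannot be used in C++:

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf unwind_point;

    static void deep_work(void)
    {
        /* ...something fails several calls down... */
        longjmp(unwind_point, 1);        /* the "throw" */
    }

    int main(void)
    {
        if (setjmp(unwind_point) == 0) { /* the "try" */
            deep_work();
        } else {                         /* the "catch" */
            printf("recovered after longjmp\n");
        }
        return 0;
    }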
>>> Note that in what is now my 47th year as a programmer, I have used callbacks under all
>>> kinds of conditions; I have grown to despise it as a way of life, particularly because it
>>> is so easily misused and abused. I have written stack unwinders, I have implemented
>>> try/catch mechanisms, I have implemented event notification systems, interrupt handlers,
>>> and pretty much everything that is possible to do with pointers-to-functions. And I
>>> prefer to use the C++ virtual function model. Note that the message map is actually a
>>> poor kludge; in a sane world, they would have been virtual methods, but in 16-bit windows
>>> the vtables would have gotten too large and the message map was invented as a more compact
>>> representation of the abstraction. It might have even worked well if they had gotten just
>>> a few more details right, such as always accepting the parameters passed to the superclass
>>> (this is a failure of design and implementation of a callback mechanism!). So I've seen
>>> all possible mistakes, and even made quite a few of them, particularly in my first ten
>>> years or so of programming.
>>>
>>> So I recognize the difference between a low-level hack and a methodology that is part of a
>>> linguistically consistent framework.
>>>
>>> I was shocked when Ada did not allow the passing of function pointers, wondering how it
>>> was possible to specify callbacks. Then I realized the language had alternative
>>> mechanisms that eliminated most of the need for user-specified callback functions
>>> (internally, the generated code performed callbacks, but you didn't have to see that).
>>> This was in the late 1970s, and since then I have realized that the raw callback is just a
>>> continuing hack to get an effect that should be achievable in other ways. The C++ virtual
>>> method (or C#, or Java virtual method) is one of these mechanisms. And there are those
>>> who will argue that even virtual methods are a hack, and that embedding, using interfaces,
>>> is the only way to go (e.g., COM, and the new Google GO language, which I have seen only
>>> little bits of). Ultimately, we want to get away from the decades-old concepts (like the
>>> computed GOTO) that were done to allow high-level programmers create constructs that
>>> compiled into efficient code at the low level, and go for clean abstractions (which most
>>> decent compilers can compile into incredibly good code at the low level, with no effort on
>>> the part of the programmer. I used to say that in a good language with a good compiler,
>>> you can write six levels of abstraction that compile into half an instruction. I've
>>> worked with good languages and good compilers, and have done this, repeatedly).
>>>
>>> It's too easy to get captivated by the implementation and forget that the implementation
>>> details are best left to automated mechanisms. Compilers are really good at this sort of
>>> grubby detail. At the lowest level, the implementation might be the instruction
>>> call [eax]
>>> but you should never have to think of this at the coding level. The construct
>>> function(args)
>>> where 'function' is actually a pointer is too close to the call [eax] to be really
>>> comfortable.
>>>
>>> Fortunately, the tools we have for MFC eliminate many of the visible details of how
>>> message maps are actually dispatched. At the lowest level, it really is
>>> call [eax]
>>> (in fact, if you single-step through the AfxWndProc assembly code far enough, this is what
>>> you will see, isomorphic to renaming of the register). But as an MFC programming, I have
>>> a very high-level concept: add an event handler. I don't need to see the details of the
>>> implementation. It just works. Well, sort-of-works, but I've already described the
>>> problems there. Virtual methods, if you accept derivation as the way of creating new
>>> classes, do the same job.
>>> ****
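By way of contrast, the same notification expressed through the
virtual-method model (again a sketch with invented names): the
"callback" becomes a pure virtual method on an interface, the context
pointer is simply 'this', and the indirect call is hidden behind the
vtable:

    #include <iostream>

    class IProgressSink {                  // the event-sink interface
    public:
        virtual ~IProgressSink() = default;
        virtual void OnProgress(int percent) = 0;
    };

    void DoLongWork(IProgressSink& sink)   // library side: no void*, no globals
    {
        for (int pct = 0; pct <= 100; pct += 25)
            sink.OnProgress(pct);          // still a call through a pointer,
                                           // but the compiler manages it
    }

    class ConsoleSink : public IProgressSink {  // client: derive and override
        void OnProgress(int percent) override { std::cout << percent << "%\n"; }
    };

    int main()
    {
        ConsoleSink sink;
        DoLongWork(sink);
    }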
>>>> So it depends on what kind of design you are referring too. Writing
>>>> an C++ implementation is excellent and most of our stuff is written in
>>>> this way using interfaces. But I guess I have to wonder why we use
>>>> C++ in general. I think it because of its
>>>>
>>>> - natural scoping capabilities,
>>>> - constructors and destructors,
>>>> - polymorphisms and interface.
>>>>
>>>> The virtual interface, a large part, but not the only part.
>>>>
>>>> Keep in mind you can duplicate all the above within pure C if you
>>>> provide the library code to do so.
>>> ****
>>> And you can write in assembler, too, but it doesn't mean it is a good thing most of the
>>> time.
>>>
>>> I'm working on a course in assembly code, because there are still people who need to work
>>> with it (yes, I was surprised, but the uses are legitimate). One of the surprises was the
>>> number of people who need to write really tight, high-performance SIMD code (apparently
>>> the x64 intrinsics don't produce particularly good code when they are used for this). But
>>> it doesn't mean that people should write apps in assembler.
>>>
>>> If my customer base accepted it, I'd be writing in C# or WPF, but they don't want this. In
>>> fact, are opposed to it (I'm not sure I follow the reasoning, but they write the checks,
>>> and I want to take their money). So I write in C++, and in a few cases in C (and I just
>>> finished a library in C, several thousand lines of code, in which callbacks form a
>>> particularly important part of the functionality. But I'd rather have done it in C++ and
>>> just supplied a pure virtual method. You would not BELIEVE what I saw done there when I
>>> got the library; it was a particularly ugly callback, well, I can't really call it
>>> "design", and "kludge" gives it too much dignity, but as redesigned, it is essentially a
>>> virtual method mechanism and therefore intellectually manageable, as well as handling a
>>> ton of problems the old mechanism simply ignored). So I still use them, but they *should*
>>> be avoided as a way of life in most coding. I nearly made all the complexity of the
>>> earlier kludge disappear in the new design, which took major rework to get complete and
>>> consistent. Doing a callback *right* isn't easy, and most people, I have found, take the
>>> easy solution. If you assume that you should avoid them, then you use them only when
>>> necessary, and ideally wrap enough syntactic sugar around them to make them go down easily
>>> (e.g., virtual methods in C++).
>>> joe
>>> *****
>>> Joseph M. Newcomer [MVP]
>>> email: newcomer(a)flounder.com
>>> Web: http://www.flounder.com
>>> MVP Tips: http://www.flounder.com/mvp_tips.htm
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm



--
HLS
From: Hector Santos on
On Feb 14, 11:06 pm, Joseph M. Newcomer <newco...(a)flounder.com> wrote:
> >Do you think this is related to the fact we have yet to evolve
> >carbon-based chips?
>
> ****
> There is a mythos about how fast the brain works. The brain is actually amazingly slow.

Compared to what?

> But a "chip"
> based on neural connections probably wouldn't work unless it weighed about 8 pounds and
> had the 3 billion or so neurons that make up the human brain.

Got to start somewhere, as did the early transistors.

> But Web-based email readers have been around for ten years...

Actually, 1996 for us - Wildcat! Internet Net Server. 1996 VAR
Business "Editor's Choice", PC WEEK, PC Computing "MVP, Best of the
Net" and InfoWorld.


> >Yes, I do agree the SEI was a disappointment of its earlier promises.
> > I was there following the proposal, acceptance, building built,
> >ceremonies and people recruitment. Westinghouse was a big part of it,
> >and one of our think tank people went there, Dr ALAN Something. An AI
> >guru, he might be among those you were complaining about that shift it
> >to a management enterprise.
>
> ****
> No, it was Alan Newell, and he never believed in the shift to management.

Last name doesn't ring a bell, but yes, Alan was more of a geek. If
he was the only "Alan" and came from Westinghouse, then that was him.
Hmmm, he could have had a sabbatical with Westinghouse to help get the
then-new AI/Venture group started. I'm sure it was him.

> I know this,
> because I knew him. But while he was part of the formation of the SEI, the incompetent
> dweeb who took over ran it his own way, using cronyism as his hiring criterion, hiring
> some of the singularly worst people in the known universe, who could spin a good story but
> were really short on serious technical skills (this guy had an ego so huge that he
> requested that the new SEI building have a helipad on top so he could be brought to work
> every morning by helicopter, at the Air Force's expense. It was even in the plans for the
> new building!

I understand the feeling. There was much of that going on in the AI/
Venture Group as well. I'm sure Circle W had a lot to say in the SEI's
direction, since Defense was a big part of its funding and a major
part of what we were doing. You have to remember, you "Academic Guys"
were there for us. I was not too happy with the direction; we were
missing all sorts of opportunities. But if a new idea didn't bring in
100M annual, it wasn't worth it. One of the things the group
"invented" was Offshore Software Engineering using Indians. The basic
idea was to use American SEs and have Indians code it. SEI was to be
part of the plan, proposed to the World Bank to get the funding to
help develop 3rd-world countries; we invited Microsoft, and Gates
turned it down (only to begin hiring green-card workers a year later).
The idea was shut down; one of the Indian management cronies went on
to develop it in our Monroeville Mall offices, and I remember seeing
offices full of Indians when we were working Reagan's "Star Wars"
projects - the writing on the wall for US programmers. I am still
shamed by all that. One project I was in charge of was preparing an
online BBS for the corporation to help share knowledge with all
divisions. I helped the PCBOARD people with the first BBS to run over
X.25/PADS. To print your morning messages, I helped co-write the
"Emailator," one of the first offline mail-reading systems, based on
TapCIS. Another project was optical scanning of military/legal
documents with OCR capabilities. It didn't get the "large" contracts,
and the project was killed. Three of us asked permission to take it on
our own. Getting corporate permission, we quit and started OptiSoft to
build the OptiFile PC-based system using the advanced scanning/imaging
board. It failed a year later (too long a sales cycle and too many
competitors), so I decided to go pure software (no hardware) and
started Santronics Software, Inc. to concentrate on offline mail
systems (OMS) for the blossoming telecomputing business. OMS declined
by 1990/93 as the cost of being online decreased. By '96, I purchased
the then #1 online hosting software/system in the world. Ironically,
OMS is making a comeback as AT&T and its offspring are once again
charging a premium for online data access, especially for mobile.

> ****
> It was KMS, Knowledge Management Systems,

That was it. I wasn't too impressed. :)

> KMS was formed to build the hypertext system that was used in the USS Carl Vinson, one of
> the modern nuclear aircraft carriers.

Correct; Defense and Expert Systems were a large part of the work we
were doing.

> (To take an aircraft carrier to sea from ordinary
> docking took three days of prep, thousands and thousands of steps that had been in massive
> printed books).

It was more than that. The concern was the dying breed of "expert"
engineers, the old farts who knew everything about the ships, subs,
planes, elevators, etc., dying off (retiring), and that the US would
need "Expert Systems" built by KB engineers who knew how to ask the
right questions and extract and "computerize" all the knowledge. One
of the goals was more diagnostic in nature, using fuzzy-logic code.

> The New Kids
> On The Block were Apollo Computer and Sun, both of whom used 68000-based workstations,
> horrendously overpriced.

Yup!

> A PC, up until Windows 3.0, didn't have the horsepower or architecture to run KMS well.
> X-Windows wasn't around, either.

There was a push to get an OS/2 version developed, since that was the
primary "direction" and MS/IBM were in cahoots. I got all the OS/2
compilers with draft docs! :)

This is a prime example of where "LESS" was better, when the OS/2
killers - Windows 3.1 and VB - were introduced. :) Oh, do I remember
how staffs of 50 were cut down to 20! :)

From: Joseph M. Newcomer on
See below...
On Mon, 15 Feb 2010 02:39:05 -0800 (PST), Hector Santos <sant9442(a)gmail.com> wrote:

>On Feb 14, 11:06 pm, Joseph M. Newcomer <newco...(a)flounder.com> wrote:
>> >Do you think this is related to the fact we have yet to evolve
>> >carbon-based chips?
>>
>> ****
>> There is a mythos about how fast the brain works. The brain is actually amazingly slow.
>
>Compared to what?
****
Oh, say, a 4-function calculator. Go read Lindsay & Norman "Human Information
Processing", the introductory book to cognitive psychology.
****
>
>> But a "chip"
>> based on neural connections probably wouldn't work unless it weighed about 8 pounds and
>> had the 3 billion or so neurons that make up the human brain.
>
>Got to start some where as did the early transistors.
****
The secret is not in the technology, but in the interconnects.
****
>
>> But Web-based email readers have been around for ten years...
>
>Actually 1996 for us - Wildcat! Internet Net Server. 1996 Var
>Business "Editor's Choice", PC WEEK, PC Computing "MVP, Best of the
>Net" and InfoWorld.
>
>
>> >Yes, I do agree the SEI was a disappointment of its earlier promises.
>> > I was there following the proposal, acceptance, building built,
>> >ceremonies and people recruitment. Westinghouse was a big part of it,
>> >and one of our think tank people went there, Dr ALAN Something. An AI
>> >guru, he might be among those you were complaining about that shift it
>> >to a management enterprise.
>>
>> ****
>> No, it was Alan Newell, and he never believed in the shift to management.
>
>Last name doesn't ring a bell, but yes, Alan was more of a geek. If
>he was the only "Alan" and came from Westinghouse, then that was him.
>hmmm, he could of had a sabbatical with Westinghouse to help get the
>then new AI/Venture group started. I'm sure it was him.
****
Al Newell was an AI guru, but as far as I know he had no deep relationship with
Westinghouse. He was a University Professor, a rank that is equivalent to being a
one-person academic department. We have a very small number of them, I think six or eight
in the whole University.
****
>
>> I know this,
>> because I knew him. But while he was part of the formation of the SEI, the incompetent
>> dweeb who took over ran it his own way, using cronyism as his hiring criterion, hiring
>> some of the singularly worst people in the known universe, who could spin a good story but
>> were really short on serious technical skills (this guy had an ego so huge that he
>> requested that the new SEI building have a helipad on top so he could be brought to work
>> every morning by helicopter, at the Air Force's expense. It was even in the plans for the
>> new building!
>
>I understand the feeling. There was much of that going on in the AI/
>Venture Group as well. I'm sure Circle W had a lot to say in SEI
>direction since Defense was a big part of its funding and major part
>of what we were doing. You have to remember you "Academic Guys" were
>there for us. I was not too happy with the direction, missing all
>sorts of opportunities. But if a new idea didn't bring in 100M annual,
>it wasn't worth it. Of of the things the group "invented" was
>Offshore Software Engineering using Indians. The basic idea was to
>use american SE and have indians code it. SEI was to be part of the
>plan, proposed to the World Bank to get the funding to help develop
>3rd world countries, we invited Microsoft and Gates turned it down
>(only to begin hiring green cards a year later). Idea shut down, one
>of the Indian management cronies went on to develop the idea in our
>Monroeville Mall offices and I remember seeing offices full of indians
>when we working Reagans "Star Wars" projects - writing on the wall for
>US programmers. I am still shamed by all that. One project I was in
>charge of was preparing a online BBS for the corporation to help share
>knowledge with all division. I helped the PCBOARD people with the
>first BBS to run over X.25/PADS. To print your morning messages, I
>helped co-write the "Emailator" one of the first offline mail reading
>systems based on TapCIS. Another project was Optical Scanning of
>military/legal documents with OCI capabilities. Didn't get the
>"large" contracts, the project was killed. Three of us asked
>permission to take it on our own. Getting corp permission, we quit
>and started OptiSoft to build the OptiFile PC based system using the
>advanced scanning/imaging board. It failed 1 year later, too long
>sales time and too many competitors, I decided to go pure software (no
>hardware) started Santronics Software, Inc to concentrated in offline
>mail systems (OMS) for the blossoming telecomputing business. OMS
>declined by 1990/93 as the cost of being online decreased. By 96, I
>purchased the then #1 online hosting software/system in the world.
>Ironically, OMS is making a comeback as AT&T and its offsprings are
>once again charging a premium for online data access, especially for
>mobile.
>
>> ****
>> It was KMS, Knowledge Management Systems,
>
>That was it. I wasn't too impress. :)
>
>> KMS was formed to build the hypertext system that was used in the USS Carl Vinson, one of
>> the modern nuclear aircraft carriers.
>
>Correct, Defense and Expert Systems was a large part of the working we
>were doing.
>
>> (To take an aircraft carrier to sea from ordinary
>> docking took three days of prep, thousands and thousands of steps that had been in massive
>> printed books).
>
>It was more than that. The concern was the dieing breed of "expert'"
>engineers, old farts that knew everything about the ships, subs,
>planes, elevators, etc, dieing off (retiring) and the US would need
>"Expert Systems" built by KB engineers who knew how to ask the right
>questions and extract and "computertize" all the knowledge. One of
>the goals was more diagnostic in nature using fuzzy logic code.
>
>> The New Kids
>> On The Block were Apollo Computer and Sun, both of whom used 68000-based workstations,
>> horrendously overpriced.
>
>Yup!
>
>> A PC, up until Windows 3.0, didn't have the horsepower or architecture to run KMS well.
>> X-Windows wasn't around, either.
>
>There was a push to get a OS/2 version developed since that was the
>primary "direction" and MS/IBM were in co-hoots. I got all the OS/2
>compilers with draft docs! :)
>
>This is prime example of where "LESS" was better when the OS/2 killer
>- Windows 3.1 and VB was introduced :) Oh, do I remember how staffs
>of 50 were cut down to 20! :)
****
In 1991, right before Windows NT was released, I gave a "breakfast talk" to a roomful of
mainframe managers on the future of computing. Pretty much everything I predicted came
true, and in fact by 2000 most of my predictions, radical for 1991, had become
underestimations. But I said "There are three operating systems which will be interesting
to watch. The first is Unix, The Operating System Of The Future. We know it will be the
operating system of the future because we've been told that for the last 20 years. Then
there's OS/2, with its graphical interface. And there's Windows NT. Where would I put my
money? Well, Unix keeps promising to take over the desktop, but its incoherent,
fragmented, and proprietary-enhancement-variant nature works against that goal. So you
will always be stuck with one hardware vendor. There's OS/2, a system developed by the
company that created the personal computer revolution then ran away from it, an operating
system doomed because they refuse to upgrade it beyond the '286 chip, and for which they
charge $3,000 per seat for the development kit. And there's Windows NT, created by a
company that understood the personal computer revolution, understands where it is going,
and has corporate plans to be there wherever it goes. Windows NT was designed to be
portable, runs on several different platforms already, and has a development kit that is
for all practical purposes free. I bet on Windows NT. You should, too."
joe
****
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Hector Santos on
Joseph M. Newcomer wrote:
>>> There is a mythos about how fast the brain works. The brain
>>> is actually amazingly slow.
>>
>> Compared to what?
>
> ****
> Oh, say, a 4-function calculator. Go read Lindsay & Norman "Human Information
> Processing", the introductory book to cognitive psychology.
> ****

Hmmm, you mean the calculator (any computer) is faster at reaching a
1x0 conclusion than a human?

Well, it's not really the same analogy, is it? Your calculator or Cray
isn't going to do much good at intelligence and at putting together
unrelated consequences. Now I'm thinking Query Dissemination Theory,
i.e., where you are no longer calculating but *zooming* to a learned
solution - like entering 1x0 into your 4-function calculator once and
never having to do it again! In that vein, the calculator is a stupid
device, and slower at getting to the answer 0 (from the typing and
from looking at the LED waiting for an answer) than you, who can do it
with no hands and your eyes closed, with "almost" no thinking
involved. Now, if you wish to begin to emulate this behavior in the
calculator, then give it a short circuit for zero and other known
conditions that will eliminate flip-flopping bits and not really do
any "processing" at all. :)

--
HLS
From: Keaven Pineau on
After reading your suggestions, I have changed the way I communicate
between the worker thread and the UI. I now use messages via
PostMessage(), which solved almost all my issues. The only thing left
was to pay attention not to call SuspendThread() several times without
calling ResumeThread() each time, because otherwise the suspend count
will not be at zero when I try to stop the thread, causing a deadlock
in the WaitForSingleObject() on my thread handle.
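A minimal sketch of the messaging part of that pattern (the names here
are illustrative, not the actual project code): the worker posts a
user-defined message to the dialog instead of calling back into it,
and is stopped by signalling a quit event.

    #include <windows.h>

    #define WM_WORKER_PROGRESS (WM_APP + 1)  // user-defined notification

    struct WorkerArgs {      // what the dialog hands to the thread
        HWND   hwndDlg;      // window to notify
        HANDLE hQuitEvent;   // signalled by the dialog to stop the worker
    };

    UINT __cdecl WorkerThread(LPVOID pParam)
    {
        WorkerArgs* args = static_cast<WorkerArgs*>(pParam);
        while (WaitForSingleObject(args->hQuitEvent, 5000) == WAIT_TIMEOUT) {
            // do one unit of work, then notify the UI thread asynchronously;
            // PostMessage is safe to call from a worker thread
            ::PostMessage(args->hwndDlg, WM_WORKER_PROGRESS, 0, 0);
        }
        return 0;  // thread ends here; the dialog waits on the thread handle
    }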

Despite the fact that I found your way of answering a bit harsh, I
thank you for pointing me in a good direction to solve my problem.

Keaven

"Keaven Pineau" <keavenpineau-no-more-spam(a)videotron.ca-no-more-spam> wrote
in message news:e2v2KMCrKHA.4492(a)TK2MSFTNGP05.phx.gbl...
> Hello all,
> I did a dialog application with a utility class with 2 working threads in
> it that are calling callback functions of the xxxdlg class.
>
> Thread A is my main working thread. This thread is waiting on 2 events:
> 1- Quit Event
> 2- Optional callback call Event
>
> This thread calls a callback function on every
> WaitForMultipleObjects() timeout, here 5000 ms.
>
> Thread B is an optional thread that can be enabled/disabled at any time.
> This thread waits only on a quit Event, and when WaitForSingleObject()
> times out it sets the Optional Event of Thread A via SetEvent().
> Timeout here is 15 000 ms.
>
> Each thread calls AfxEndThread(0,FALSE); at the end, and the control
> function waits on A->m_hThread and/or B->m_hThread before deleting
> their respective objects.
>
> Now, if I do not enable thread B, I can start and end Thread A without
> any issue. If I start both threads A and B, I can also quit them
> without problem if they were both running. But if I start both threads
> A and B, stop thread B, and wait 10 seconds, then when I try to stop
> thread A the WaitForSingleObject() on its handle will deadlock.
>
> I have found out that it is related to the event I am using to tell
> thread A to execute the optional callback. If I simply comment out the
> SetEvent(), the problem never occurs.
>
> Any idea, why this is happening?
>
> Thank you
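For reference, a minimal reconstruction (assumed names and structure,
not the original code) of the thread-A loop as described in the quoted
post: wait on the quit event and the optional event with a 5000 ms
timeout, call the regular callback on each timeout, and call the
optional callback when thread B signals its event.

    #include <windows.h>

    struct WorkerContext {            // hypothetical context given to thread A
        HANDLE hQuitEvent;            // signalled to terminate the thread
        HANDLE hOptionalEvent;        // signalled by thread B via SetEvent()
        void (*pfnTimerCallback)(void*);    // called on each 5000 ms timeout
        void (*pfnOptionalCallback)(void*); // called when the optional event fires
        void*  pUserData;             // handed back to both callbacks
    };

    UINT __cdecl WorkerThreadA(LPVOID pParam)
    {
        WorkerContext* ctx = static_cast<WorkerContext*>(pParam);
        HANDLE handles[2] = { ctx->hQuitEvent, ctx->hOptionalEvent };

        for (;;) {
            DWORD wait = WaitForMultipleObjects(2, handles, FALSE, 5000);
            if (wait == WAIT_OBJECT_0)            // quit event: leave the loop
                break;
            if (wait == WAIT_OBJECT_0 + 1)        // optional event from thread B
                ctx->pfnOptionalCallback(ctx->pUserData);
            else if (wait == WAIT_TIMEOUT)        // 5000 ms elapsed
                ctx->pfnTimerCallback(ctx->pUserData);
        }
        return 0;
    }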
