From: Hector Santos on
David Ching wrote:

> You can also set a global unhandled exception handler in your app that
> will display the message box for all exceptions. This saves you from
> having to put try/catch in so many places, when all you do in the catch
> is show the exception's message. BTW, the exception also contains a
> nice callstack string, so you can dump the call stack in the message box
> as well! :-)

Yes, I know that much. At some point I am going to write a generic
error logging/display class with a recording/sending mechanism for
end users. Right now I am just getting a feel for .NET, learning all
the basics, and I would probably be re-inventing this too when I
cross that bridge.

Today, for our server, I have a RecordError class that allows event
recording and/or basic file recording with sysop signaling, paging
options, etc.

>> The solution was simple, again, not knowing what are the possible
>> specific exceptions, I used catch all instead and checked for the
>> "LiveID" string in the exception message.
> To be fair, this same problem would have happened if error codes were
> returned instead of exceptions thrown. (I.e. you may not understand
> when and under what circumstances various error codes are returned vs.
> when and under what circumstances various exceptions are thrown.) It's
> the same thing.

Sure, like a LoadLibrary() with indirect loading. You get an error
for your DLL, not necessarily knowing which DLL it actually tried to
load. DEPENDS helps here.

>> The sad fact is this - it's here. It is what it is; libraries are done
>> mostly one way now, and developers have no choice but to get used to it
>> and learn how to work with it.
> For me, exceptions offer a way of stepping back and recovering from
> "various errors" without tediously checking each step along the way.
> Theoretically they conserve brain power. But the downside is you give
> up explicit control that you intrinsically get when you are forced to
> check error codes with nested if. So you spend at least some of the
> saved brain power understanding and ensuring you cover the various
> situations, sometimes by trial and error, as you say. For me becoming a
> .NET programmer, the main question is, do you want to spend brain power
> making sure you work well with the framework, or do you want to spend
> brain power tediously coding each possible little thing in MFC. This is
> why I say .NET makes MFC look like assembly language, with the
> advantages and disadvantages of such.

Good way of putting it - conserving brain power. I have to agree with
you there. For me, succumbing to .NET means not worrying about the
million DLLs it depends on and loads. I was looking at Process Explorer
today just for wcLEX and comparing how much more streamlined my native
apps are, but then again, the OS hooks are applied anyway, and that's
basically why I've given in - not worrying about the overhead.

PS: Glad you are no longer mad at me. :)

From: Joseph M. Newcomer on
See below...
On Wed, 23 Jun 2010 08:31:32 -0700, "David Ching" <dc(a)> wrote:

>"Goran" <goran.pusic(a)> wrote in message
>> Thing is also that .NET exceptions, like MFC (and VCL) ones, can
>> really be costly: they are on the heap, they are info-rich (e.g.
>> there's a CString inside) etc. But despite that, I still say, just
>> like you: there's no performance problems with exceptions; if there
>> is, programmer is doing something horribly wrong in 99% of cases ;-).
Note that for rich data objects, they are going to have to be deleted eventually, so the
performance is going to be constant; the point is that you would expect the exception to
simply trigger an earlier deletion.

Note that I assume we have been talking about the performance of the exception handler;
for me, that is the delta-T between the 'throw' and the 'catch'. And to be a valid
comparison, it has to be compared with the time involved in the if..return model; if I end
up doing a lot of cleanup of the heap when I get a FALSE back, then there is no penalty
for throwing the exception. But from the descriptions, it sounds like the throw-catch
time is what has been measured, and which matters, and since the call is shallow, it
cannot involve massive destructor activity. So that suggests that the exception mechanism
in .NET is really expensive compared to a simple mechanism such as in C++.
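That throw-to-catch delta-T is easy to measure directly. Here is a small sketch (names and structure mine) that times a deliberately shallow failure path both ways - if..return versus throw/catch - with no destructor activity involved. Absolute numbers are machine- and compiler-dependent, but the throw path is typically far slower per failure.

```cpp
#include <cassert>
#include <chrono>

// Result of one benchmark run: microseconds spent in each model,
// plus a failure counter to keep the loops from being optimized away.
struct BenchResult { long long ret_us; long long throw_us; int failures; };

static bool parse_ret(int v, int &out) {      // failure via return value
    if (v < 0) return false;
    out = v;
    return true;
}

static int parse_throw(int v) {               // failure via exception
    if (v < 0) throw v;
    return v;
}

BenchResult bench(int n) {
    using clock = std::chrono::steady_clock;
    BenchResult r{0, 0, 0};

    auto t0 = clock::now();
    for (int i = 0; i < n; ++i) {
        int out;
        if (!parse_ret(-1, out)) ++r.failures;              // cheap failure path
    }
    auto t1 = clock::now();
    for (int i = 0; i < n; ++i) {
        try { parse_throw(-1); } catch (int) { ++r.failures; }  // full throw/catch
    }
    auto t2 = clock::now();

    auto us = [](clock::duration d) {
        return (long long)
            std::chrono::duration_cast<std::chrono::microseconds>(d).count();
    };
    r.ret_us = us(t1 - t0);
    r.throw_us = us(t2 - t1);
    return r;
}
```

To be a fair comparison, as Joe notes, a real measurement would also have to charge the if..return model for any manual cleanup it does on the FALSE path.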
>> And absolutely, the programmer does not know, by looking at any non-
>> trivial code, where it's slow, either!
>Hi Goran, well I understand your point. My favorite string class is
>CString, and I use it by default. But I once had a situation on startup
>where the profiler showed a lot of time in copying CString, so we replaced a
>few lines of code to use lstrcpy() instead of CString, and that saved
>significant time. So I understand your point that exceptions in general are
>very usable, but it's only in certain situations where they cause a
>noticeable problem.
In the big-O performance characterizations, we talk about O(f(n)) where f is a function
based on the number of elements n, e.g. O(n**2), O(log2 n), O(n log2 n), etc. But the
truth is that the REAL characterization is
k + C * f(n) + e

where k is the setup time, C is the constant of proportionality, and e is the teardown
time. In some cases, k or e can dominate. In other cases, C is the dominant factor. Your
replacing CString with lstrcpy changed the value of C.

Thus, an algorithm which is O(n**3) can be faster than an algorithm of O(n) if k+e are
high in the O(n) and C is small in the O(n**3). But two algorithms which are O(n) can
differ massively because of k+e and C. This is one of the uglier secrets of computational
complexity.
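The constant C is exactly what David's CString-to-lstrcpy change attacked. A minimal sketch (mine, not from the thread): both functions below are O(n), but the first pays a heap allocation and copy per element, while the second just reads the existing data.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Two O(n) ways to total the lengths of n strings. Same f(n), very
// different constant C.
long total_len_copying(const std::vector<std::string> &v) {
    long total = 0;
    for (std::size_t i = 0; i < v.size(); ++i) {
        std::string copy = v[i];          // needless heap copy per element: big C
        total += (long)copy.size();
    }
    return total;
}

long total_len_inplace(const std::vector<std::string> &v) {
    long total = 0;
    for (const std::string &s : v)        // no copies, same O(n), small C
        total += (long)s.size();
    return total;
}
```

A profiler sees both as linear scans; only the constant of proportionality differs, which is why the fix was invisible to big-O reasoning but visible on the stopwatch.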
>And perhaps .NET exceptions are more costly than C++ ones, I don't know. (I
>didn't use exceptions in C++, I only started using them because I had no
>choice when I went to .NET.) I will say it is common in .NET programming
>to run in the debugger with first chance exceptions being enabled so you are
>well aware of any exceptions being thrown. And when I eliminate those, the
>program does seem to run a bit faster, but I don't know if that is my
>imagination or not. It does seem avoiding unnecessary exceptions is
>emphasized more in .NET, but I always thought that was because they are used
>more in .NET. Also perhaps because it involves loading more assemblies to
>deal with the exception, and loading assemblies is expensive.
Joseph M. Newcomer [MVP]
email: newcomer(a)
MVP Tips:
From: David Ching on
"Hector Santos" <sant9442(a)> wrote in message
> PS: Glad you are no longer mad at me. :)

The nice thing about being a developer and having technical discussions is
that programming truth transcends all else! :-)

-- David

From: David Ching on
"Joseph M. Newcomer" <newcomer(a)> wrote in message
>>Why does it make a difference if it is .NET or C++? The concept is
>>the same.
> ****
> Performance. Throwing an exception in .NET involves a lot more effort
> because of the need
> to manage references.

Right, I thought that might be the case. You're probably right that C++ is
more performant than .NET when throwing 'non-exceptional' exceptions;
however, my understanding is that this is still frowned upon in C++. But I
have not used exceptions much in C++, so I don't know for sure.

> If I am parsing integers, I might need to parse millions of them; if I am
> parsing scripts,
> I might need to process less than 10 of them in a given execution. Scale
> matters.

That makes sense, thanks for explaining.

> Sadly, it sounds like the mechanism was complete overkill for something
> as trivial as
> parsing a simple number, and probably didn't extend to parsing other
> interesting syntactic
> constructs (e.g., part numbers, social security numbers, etc.), so I
> question the utility
> of something that could not handle millions of integers efficiently. Even
> if there were
> errors.

Yes, that's why they added TryParse(). But I agree the decision may not
scale to other things. I still err on the side of using return values
instead of throwing exceptions for things like invalid input, though.
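C++17 ended up offering the same split as Parse/TryParse: std::stoi throws on bad input, while std::from_chars reports failure through a return value, so parsing millions of integers with some invalid ones costs no exception machinery. A small sketch (the wrapper name try_parse_int is mine):

```cpp
#include <cassert>
#include <charconv>
#include <string>

// TryParse-style integer parsing: returns false on failure instead of
// throwing. Requires the whole string to be consumed, like Int32.TryParse.
bool try_parse_int(const std::string &s, int &out) {
    auto res = std::from_chars(s.data(), s.data() + s.size(), out);
    return res.ec == std::errc() && res.ptr == s.data() + s.size();
}
```

The caller checks the bool on the hot path, exactly the if..return model, and reserves exceptions for the genuinely exceptional.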

-- David

From: Joseph M. Newcomer on
Actually, I completely disagree with the fatal exceptions characterization.

You can attempt to salvage your program state, for example, by doing an AutoSave of the
document to save what is there, so the user, after a restart, finds virtually no work has
been lost. You can flush files, roll back transactions, and generally do something "user
friendly" to handle the error. Instead of having the top-level implicit handler say
Unhandled Exception
<user-unfriendly description here>
you can have
Fatal internal error
The program has run out of memory
It will now save all your existing work and restart
Please wait a few moments while this completes
If you see this problem continuing, please contact tech support
and report "Internal error 800765"

where that error code encodes something incredibly useful to help tech support (and
ultimately, the implementation team) figure out what should be done to cope with it.

Yes, no matter what, the program is shortly going to be put out of its misery (as the
author claims), but giving it a chance to write its Last Will and Testament, to make peace
with its Maker, etc. are not bad things to support. It may be going into hospice care but
it is not dead yet.

By the way, the code 800765 probably means "the program has been up too long, and memory
is internally fragmented. In the future, do not run the program more than six months at a
time." But, for example, if the error occurs frequently, it suggests that maybe we should
be looking into ways to reduce memory fragmentation. The key here is that the
user-unfriendly exception handler that is the default is not a good product strategy.
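A sketch of that "Last Will and Testament" handler in C++ (all names here - AutoSaveAll, ClassifyError, RunWithLastWill - are hypothetical, and 800765 is just the illustrative code from above): a top-level wrapper that saves work, shows a friendly message with an internal error code, and only then lets the program die.

```cpp
#include <cassert>
#include <cstdio>
#include <exception>
#include <new>

// Hypothetical emergency-save hook: flush documents so a restart loses
// almost nothing. A real app would write its open files here.
static bool AutoSaveAll() { return true; }

// Map the exception to an internal code meaningful to tech support.
// 800765 = out of memory, per the illustrative example above.
static int ClassifyError(const std::exception &e) {
    if (dynamic_cast<const std::bad_alloc *>(&e)) return 800765;
    return 800000;  // generic internal error
}

// Top-level wrapper: instead of the default "Unhandled Exception" box,
// save work, report a friendly message, and return the internal code.
int RunWithLastWill(void (*app)()) {
    try {
        app();
        return 0;
    } catch (const std::exception &e) {
        int code = ClassifyError(e);
        AutoSaveAll();
        std::fprintf(stderr,
            "Fatal internal error: %s\n"
            "Your work has been saved; the program will now restart.\n"
            "If this continues, contact tech support and report "
            "internal error %d\n",
            e.what(), code);
        return code;
    }
}
```

The program is still going to die, but it dies after its AutoSave, with a message the user and tech support can actually act on.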
On Wed, 23 Jun 2010 08:25:22 -0700, "David Ching" <dc(a)> wrote:

>"Giovanni Dicanio" <giovanniDOTdicanio(a)> wrote in message
>> "Vexing exceptions"
>Great article and illustrative of what we were trying to say!
>-- David
Joseph M. Newcomer [MVP]
email: newcomer(a)
MVP Tips: