From: Goran on
On Jun 23, 10:37 am, Giovanni Dicanio
<giovanniDOTdica...(a)REMOVEMEgmail.com> wrote:
> Sure it isn't (ain't?) rocket science... but do you like a fopen that
> throws an exception if the file cannot be opened? No, I prefer one
> returning an error code.

Several major frameworks use throwing file-open functions and don't
seem to suffer (I know, that's arguing by numbers, which is a logical
fallacy ;-))

> The fact that you can't open a file is not an exceptional condition, and
> I prefer code like:
>
>    if ( some_open_file_api(...) == error )
>    {
>       ... do what you want... (e.g. create the file, or other stuff....)
>    }
>    ... normal flow
>
> instead of try/catch.

The thing is, IMO: exceptions are not "for exceptional conditions".

They are a way to structure code in the face of conditions that
normally cause it to stop whatever it started doing. (Normally, these
conditions are errors, but not necessarily.) They allow us to
__structure code__ more clearly.

How is that achieved? Simply by eliminating (out of sight, but not out
of existence, really) a myriad of error paths that do no good except
produce screen garbage. So instead of seeing endless if statements,
you see what the code does when it works (and we write code so that it
works, not so that it errs, don't we?). When you need to see "error"
paths, you look at the first enclosing try/catch. When you need to see
cleanup code, you look at the destructors of stack objects.
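To make that concrete, here's a minimal sketch (plain C++, with hypothetical Resource/work names, nothing MFC-specific): the cleanup lives in a stack object's destructor, so it runs on both the normal path and the exception path, and the only place the error surfaces is the first enclosing try/catch.

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical RAII resource: cleanup lives in the destructor,
// so it runs whether we leave the scope normally or via an exception.
struct Resource {
    bool* released;
    explicit Resource(bool* flag) : released(flag) {}
    ~Resource() { *released = true; }
};

bool work(bool fail, bool* released) {
    try {
        Resource r(released);  // stack object owns the cleanup
        if (fail)
            throw std::runtime_error("something stopped us");
        return true;           // normal flow, no error ifs in sight
    } catch (const std::exception&) {
        // first enclosing try/catch: the only place the error surfaces
        return false;
    }
}
```

Either way `work` exits, the destructor has run, so there is no separate cleanup path to forget.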

So... IMO, the "exceptionality" of the situation matters very little.
It's what happens WRT code __structure__ that's important.

About your file open operation: typically, you work with a file like
this:

open
read/write/rinse/repeat //a lot of code here

Now... if opening a file fails, "a lot of code here" is dead in the
water. So you have a choice of writing an if to stop said code from
running (if open has an error result), or doing nothing to stop it
(if open throws). And if it throws, you are certain that you'll never
forget said "if", because it was done for you.
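A minimal sketch of the two choices (hypothetical open_ec/open_throw/process_* names standing in for a real file API): with the error-code style you must remember to write the if; with the throwing style the stop is automatic.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Hypothetical open functions, just to contrast the two styles.
bool open_ec(const std::string& name) {        // error-code style
    return name != "missing.txt";
}
void open_throw(const std::string& name) {     // throwing style
    if (name == "missing.txt")
        throw std::runtime_error("cannot open " + name);
}

int process_ec(const std::string& name) {
    if (!open_ec(name))   // the "if" you must remember to write
        return -1;
    // ... a lot of code here ...
    return 0;
}

int process_throw(const std::string& name) {
    open_throw(name);     // a failure stops "a lot of code" for you
    // ... a lot of code here ...
    return 0;
}
```

In `process_throw`, nothing after the open runs on failure, and the caller's enclosing try/catch sees the error.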

Now, I suggest that you look at your own code and say honestly: when
you open a file and that fails, do you continue? I am pretty certain
that in the majority of cases you don't. So why would you like these
"ifs"?

Goran.

P.S. I mildly like the MFC way of having a choice (a throwing CFile
ctor, and a non-throwing ctor + BOOL Open).
From: Goran on
On Jun 23, 6:02 am, Joseph M. Newcomer <newco...(a)flounder.com> wrote:
> Again, the question is performance: if you had to open a file, read data in, and finally
> invoke the parsing, what percentage of the time is spent handling an exception.

Joe, it's true, people have been complaining about said .NET Parse
having a considerable impact on performance in at least one place on
the internet, and they were right. They actually went through the
measurements on their use case (and even wrote their own non-throwing
Parse).

The problem was that they were batch-processing some files, and some of
them had a lot of ill-formatted numbers in them. So they could measure
that the use of Parse made the code run two times slower (on their
use case, and clearly, depending on the percentage of these bad
numbers). And note: they were working with files! In fact, IIRC,
initially they saw that "bad" files took more time to process than
"good" files - that's what tipped them off.

Of course, the question is why they had all those bad numbers in the
first place, but hey...

The thing is also that .NET exceptions, like MFC (and VCL) ones, can
really be costly: they live on the heap, they are info-rich (e.g.
there's a CString inside), etc. But despite that, I still say, just
like you: there are no performance problems with exceptions; if there
are, the programmer is doing something horribly wrong in 99% of cases ;-).
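For what it's worth, standard C++ offers the same pair of styles that .NET's Parse/TryParse does: std::stoi throws on bad input, while std::from_chars reports failure through an error code, which is the form you'd want when batch files are full of bad numbers. A small sketch (the parse_int wrapper is just an illustrative name):

```cpp
#include <cassert>
#include <charconv>
#include <string>

// Error-code style parsing: std::from_chars never throws, unlike
// std::stoi, which throws std::invalid_argument on bad input. When
// ill-formatted numbers are common, this avoids paying the cost of
// throwing and catching on every bad value.
bool parse_int(const std::string& s, int& out) {
    auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), out);
    // Success only if parsing succeeded AND consumed the whole string.
    return ec == std::errc() && ptr == s.data() + s.size();
}
```

The caller gets a plain bool to test, so a file full of garbage numbers costs no unwinding at all.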

And absolutely, the programmer does not know, just by looking at any
non-trivial code, where it's slow, either!

Goran.
From: RB on
Hey thanks for replying, this thread picked up while your timezone
was sleeping. I will respond to this one first and then any other
replies you sent to me if not material already covered here.

> As I said: you are not permitted to write try/catch statements ;-)

Yes I do remember you saying that but at the time I did not really
understand what you meant by it other than the fact that most of the
time it would end up being counterproductive or something to that
effect. I get the most from your replies after I experiment thru a
few foobar attempts and then go back and read them again.

I also remember you saying,

> Find the answer to that, and you'll find that you don't need a try/catch

And I have found the answer to that.

But on your code example,

> try
> {
> workworkwork();
> }
> catch(CException* p)
> {
> DEL_ON_EXIT(p);
> p->ReportError();
> }

I see what calling ReportError does. When I tried it in my code
it just reported the error but did no cleanup etc., so obviously
you were explaining "in relation to" here.
However, I would like to raise a couple of items here,
since I am learning LOADS from all of this.
First off, exactly what is the define DEL_ON_EXIT? My VC6
compiler doesn't recognize it, so I need the include or whatever?
Second, when I ran this,

catch (CException* e)
{
    // do whatever
    e->ReportError();  // runs, reports, and returns
    throw;             // pass it on to the framework handlers
    // The throw gives me all of what ReportError did and more,
    // including a DELETE_EXCEPTION(e);
    // so I surmise this is some of what you meant when you said
    // I should not be using try and catch.
}

> In this case (wrong "magic" in the file), you have these options
> (AFAICanSee):
> 1. use AfxThrowArchiveException with e.g. badIndex (kinda silly name
> given the associated message, but fair enough).

Yes, I had stumbled along until I came up with this:

// in my doc serial load loop
ar >> FID_Read;
if (FID_Read != FileID)
{   // FileID mismatch
    AfxThrowArchiveException(CArchiveException::badIndex, NULL);
}
......

This thread took off with so many opinions on try and catch that I lost
track at first. But in summation, I learned the syntax I was after and LOADS
of concept. I feel like a more confident level-1.0 dummy now.
From: Goran on
On Jun 23, 2:54 pm, "RB" <NoMail(a)NoSpam> wrote:
> Hey thanks for replying, this thread picked up while your timezone
> was sleeping.  I will respond to this one first and then any other
> replies you sent to me if not material already covered here.
>
> > As I said: you are not permitted to write try/catch statements ;-)
>
>   Yes I do remember you saying that but at the time I did not really
> understand what you meant by it other than the fact that most of the
> time it would end up being counterproductive or something to that
> effect.  I get the most from your replies after I experiment thru a
> few foobar attempts and then go back and read them again.
>
> I also remember you saying,
>
> > Find the answer to that, and you'll find that you don't need a try/catch
>
> And I have found the answer to that.
>
> But on your code example,
>
> > try
> >  {
> >     workworkwork();
> >  }
> > catch(CException* p)
> >  {
> >     DEL_ON_EXIT(p);
> >    p->ReportError();
> >  }
>
>   I see what calling ReportError does. When I tried it in my code
> it just called reported the error but did no cleanup etc, so obviously
> you were obviously explaining " in relation to " here.
>    However I would like to express these couple of items here
> since I am learning LOADS from all of this.
>   First off exactly what is the define DEL_ON_EXIT ? My VC6
> compiler doesn't reconize it so I need the include or whatever ?

Ugh. Can't you get a more recent compiler? VC6 - ugh!

About DEL_ON_EXIT: I made it for myself a long time ago, when I
decided that I didn't want to meddle with MFC exception macros anymore.
Here it is:

class CMFCExceptionDelete // make it noncopyable
{
public:
    CMFCExceptionDelete(CException* pe) : m_pe(pe) {} // should be explicit...
    ~CMFCExceptionDelete() { m_pe->Delete(); }
private:
    CException* m_pe;
    void operator=(const CMFCExceptionDelete&) {}
    CMFCExceptionDelete(const CMFCExceptionDelete&) {}
};
#define DEL_ON_EXIT(e) CMFCExceptionDelete Delete_e_OnExit(e);

How to use:
....
catch(CException* pe)
{
    DEL_ON_EXIT(pe); // Always the first thing to do in a catch!
    // do anything you like, including throwing some other exception
    // but don't do throw; or throw pe;
}

That ensures that pe is alive inside the catch and that pe->Delete()
is called at block exit.

Drawback: you can't hold on to pe after the block exit. If you need
that, find something else ;-).
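For what it's worth, on a post-VC6 compiler the same delete-on-exit idea can be had from std::unique_ptr with a custom deleter that calls Delete() instead of delete. A sketch, using a stand-in FakeException class since MFC's CException isn't available here:

```cpp
#include <cassert>
#include <memory>

// Stand-in for MFC's CException, which must be Delete()-d, not deleted
// (MFC's Delete() checks the auto-delete flag before doing 'delete this').
struct FakeException {
    static int deleted;            // counts Delete() calls, for the demo
    void Delete() { ++deleted; }
};
int FakeException::deleted = 0;

// A unique_ptr with a custom deleter plays the role of DEL_ON_EXIT:
// Delete() is called exactly once, at block exit, even on a throw.
struct CallDelete {
    void operator()(FakeException* pe) const { pe->Delete(); }
};
using ExceptionGuard = std::unique_ptr<FakeException, CallDelete>;
```

Usage mirrors the macro: make the guard the first thing in the catch block, and the exception object is cleaned up however the block exits.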

Goran.
From: Hector Santos on
Today, Joe, IMV, it's more a matter of documentation. The overhead is
somewhat less important in this "Bulky Code" world; attempting to
optimize for size or speed is of less importance, or negligible, in an
already heavy-handed OS load environment. Developing applications in
p-code environments is par for the course today.

I think part of the problem for developers is separating what's
critical vs what's natural and/or expected.

Often the complexity (possibly due to lacking or overly complex
documentation) promotes using a single catch-all (more below).

When the components are "better understood", sometimes being explicit
in exception trapping is useful; other times it's not.

Another problem is that constructors do not lend themselves to functional
programming, so exceptions are required where the class logic needs them.

And of course, as the generations continue, library designers are more
OOP- and event-programming oriented and thus use fewer functional
programming (FP) techniques, and sometimes too much OOP orientation
for the simplest of constructs. As you know, in the early days, good
library designers (by necessity, for the better brands) used to provide
frameworks for both OOP and FP programming audiences - an expensive
upkeep in the long run. Something had to give, and the trend is and
has been OOP; FP gets less attention. But then again, the irony is
that you see the same people going back to FP techniques, or adding
them, a la F#.

A good example is my recent experience with my first real .NET
project, wcLEX (Wildcat! Live Exchange). I am using this project to
(re)learn all the .NET particulars and "How To's" for the more
expensive product migration coming.

Not knowing all the possible "error" conditions of the .NET
library, I got into the practice of wrapping try/catch around many of
the code blocks, but also working in try/catch/finally logic where
necessary to return negative or positive results. A good last example
was adding search logic using the .NET regular expression library.

public bool SearchForums(string sPattern)
{
    try
    {
        Regex rgx = new Regex(sPattern, RegexOptions.IgnoreCase);
        foreach (var forum in ForumsList)
        {
            MatchCollection matches = rgx.Matches(forum.Description);
            if (matches.Count > 0)
            {
                // got something
            }
        }
        return true;
    }
    catch (Exception ex)
    {
        MessageBox.Show("Regular Expression Error: " + ex.Message);
    }
    return false;
}

This turned out to be nice because, for illegal sPattern syntax, the
class throws an exception with a detailed description of the syntax
error. I am glad it did this, and not me, and it also became "help"
for the user rather than something we have to document.

In short, 10-15 years ago, my belief was that using exceptions was a
crutch for not understanding code, though sometimes you had no choice
because the OOP class did not lend itself to controlled FP methods.
But I thought it promoted bad coding overall.

Today, I believe that exception trapping is a vital, necessary design,
especially for environments like .NET. There is still the issue of
programmers not grasping everything, but the thrown exceptions are
"better", or rather designed with the intent that developers will use
them to provide non-critical feedback.

You can get in trouble, though, when you don't understand the errors.
This last example is a good illustration: an explicit exception
catch was used, but it became a critical abort failure when implemented
in a different way.

The Windows Live ID SDK has an example implementation where the main
program.cs has a catch for the specific exception

System.IO.FileNotFoundException

like so:

using System;
using System.Collections.Generic;
using System.Windows.Forms;
using Microsoft.Win32;

namespace WindowsLiveIDClientSample
{
    static class Program
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);

            try
            {
                Application.Run(new MainWindow());
            }
            // System requirement detection.
            catch (System.IO.FileNotFoundException fnfex)
            {
                // Checking for the absence of the Windows Live Sign-In Assistant DLL.
                if (fnfex.Message.Contains("WindowsLive.ID.Client"))
                {
                    MessageBox.Show("Please install the Windows Live ID For Client Applications SDK.");
                }
                else
                {
                    MessageBox.Show(fnfex.Message);
                }
            }
            finally
            {
                Application.Exit();
            }
        }
    }
}

Well, if you isolate this LiveID class into your own library and the
LiveID component is not already installed, then you get an exception
that is not System.IO.FileNotFoundException.

I learned the hard way that this exception was only correct when the
LiveID assembly was bound to the EXE and not to a helper DLL.

The solution was simple: again, not knowing all the possible specific
exceptions, I used a catch-all instead and checked for the "LiveID"
string in the exception message.

My take on it.

The sad fact is this - it's here. It is what it is; libraries are
mostly done one way now, and developers have no choice but to get used
to it and learn how to work with it.

--
HLS


Joseph M. Newcomer wrote:

> To maximize performance, the Microsoft implementation was designed to create exception
> frames extremely quickly (the try blocks) and it was (I believe rightly) felt that any
> costs required to actually handle the exceptions could be paid at the time the exception
> was thrown. I think this is a good engineering tradeoff. So yes, throwing an exception
> is a pretty heavy-duty operation. But when an error occurs, you are going to be reduced
> to working in "people time" (that is, someone is going to have to read the error message
> and respond to it, integer seconds if not integer tens of seconds) so expending a few
> hundred microseconds (remember, on a 3GHz machine, we can execute as many as 6
> instructions per nanosecond (and that's on the OLD Pentium machines, not the Core
> architectures which can do more), so a microsecond is a *lot* of instructions. (For those
> of you who don't believe the arithmetic, look up "superscalar architecture").
>
> So having exceptions as a basic loop control structure is going to be a Really Bad Idea.
> But frankly, understanding such "spaghetti" code is extremely difficult, and trust me, the
> worst assembler spaghetti code is clear as crystal compared to some of the
> exception-driven code I've had to plow through! Exceptions are just that: something went
> wrong.
> joe
> On Tue, 22 Jun 2010 14:06:56 -0700, "David Ching" <dc(a)remove-this.dcsoft.com> wrote:
>
>> "Doug Harrison [MVP]" <dsh(a)mvps.org> wrote in message
>> news:9c1226ldk8g207f4asif27nehrcfji6g0e(a)4ax.com...
>>> On Tue, 22 Jun 2010 13:12:33 -0400, Joseph M. Newcomer
>>> <newcomer(a)flounder.com> wrote:
>>>
>>>> I'm not sure this is a good piece of advice. I use try/catch a lot; it is
>>>> essentially a
>>>> "non-local GOTO", a structured way of aborting execution and returning to
>>>> a known place,
>>>> while still guaranteeing that all intermediate destructors for stack
>>>> variables are called.
>>>> It is particularly useful in writing tasks like recursive-descent parsers
>>>> (particularly if
>>>> you just want to stop without trying to do error recovery, which is always
>>>> hard) and
>>>> terminating threads while still guaranteeing that you return from the
>>>> top-level thread
>>>> function. It is clean and well-structured way of aborting a
>>>> partially-completed
>>>> operation. An it is the only way to report errors from operations like
>>>> 'new'. Also, look
>>>> at the number of exceptions that can be thrown by std:: or boost::.
>>> Goran was not saying exceptions are bad, just that overly frequent use of
>>> try/catch is bad, which it usually is. I've been saying for a long long
>>> time that there's an inverse relationship between the number of try/catch
>>> clauses you have and the effectiveness with which you're using exceptions.
>>> There are a number of reasons for this. On the API designer side, using
>>> exceptions where return codes are more appropriate forces callers to write
>>> try/catch whenever they use the function, so that's a bad use of
>>> exceptions. You never want to turn exception usage into a clumsier version
>>> of return codes. On the user side, try/catch gets overused when people
>>> don't employ the RAII idiom and need to perform clean-up that should be
>>> handled by a destructor. Ideally, exceptions are caught far away from
>>> where
>>> they're thrown, in a top-level handler, which reports the error to the
>>> user, logs it, or whatever. It is relatively rare for well-designed code
>>> to
>>> need to handle the exception closer to the throw-point.
>>>
>> Not to mention, the overhead of throwing exceptions reduces performance if
>> many exceptions are thrown. Exceptions are not meant as a "non-local GOTO".
>> Exceptions are meant for rarely occurring ERROR conditions (you know,
>> 'exceptional' conditions!), not normal control flow. I once saw a switch
>> statement rewritten as a bunch of thrown exceptions, not a pretty sight.
>>
>> -- David
>>
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm