From: RFOG on
If you step into the second delete, perhaps you can find out what's happening.

BTW, new/delete are runtime library stuff, so I think OS exceptions have
nothing to do with this.

On Fri, 14 May 2010 17:48:12 +0800, "mos" <mmosquito(a)163.com> wrote:

>OK, it seems I should describe the reason.
>I try the windows SEH like:
>
>#include <Windows.h>
>#include <stdio.h>
>
>int on_exception(PEXCEPTION_POINTERS pExceptPtrs)
>{
>    printf("caught the exception\n");
>    return EXCEPTION_EXECUTE_HANDLER;
>}
>
>int main()
>{
>    __try
>    {
>        char* p = new char;
>        delete p;
>        delete p;
>    }
>    __except (on_exception(GetExceptionInformation()))
>    {
>    }
>}
>
>But the program deadlocks.
>If I use "char* p = 0; *p = 0;" the program will print
>
>caught the exception
>
>That's all.
>
From: David Lowndes on
> The code will cause msvcr100d.dll deadlock at GetProcAddress(hlib,
>"MessageBoxW")))

Under a debug build both VS2008 SP1 & VS2010 are consistent - they
display an assertion message telling you your code has a bug.

For release builds, VS2008 SP1 appears to ignore the issue, while
VS2010 causes a message box to get displayed from the depths of
HeapFree.

Given your intended desire to catch the problem, I'm not sure what to
advise.

Dave
From: mos on
Thanks for your reply.
I tested in my project team; some got the same result as you, and others got
the same result as me.
I am still confused.

"David Lowndes" <DavidL(a)example.invalid> wrote:
>> The code will cause msvcr100d.dll deadlock at GetProcAddress(hlib,
>>"MessageBoxW")))
>
> Under a debug build both VS2008 SP1 & VS2010 are consistent - they
> display an assertion message telling you your code has a bug.
>
> For release builds, VS2008 SP1 appears to ignore the issue, while
> VS2010 causes a message box to get displayed from the depths of
> HeapFree.
>
> Given your intended desire to catch the problem, I'm not sure what to
> advise.
>
> Dave


From: Tamas Demjen on
mos wrote:

> But the program deadlocks.
> If I use "char* p = 0; *p = 0" the program will print

That's because there's a difference between dereferencing a NULL pointer
and addressing random memory. The CPU tries to protect you from causing
damage whenever it can, but it is possible for a program to overwrite
its own data structures in such a way that the CPU can't notice it. The
result of that is unpredictable. Accessing a NULL pointer itself is
relatively harmless (unless it's a result of a random overwrite).
Addressing random memory or double deletion is completely unpredictable,
and you may or may not get an access violation. An infinite loop is even
possible.

Personally I think it would be safer if 'new' and 'malloc' always
wiped out the allocated memory, and if 'delete' and 'free' zeroed out
the pointer, even in release mode. There are certain types of
applications where the added security outweighs the performance
penalty. However, it is unreasonable to expect that the compiler can
always protect you from shooting yourself in the foot. If security is
of utmost importance, you have to implement fault tolerant programming
techniques, such as sandboxing (file system and process isolation),
watchdog, tandem, voting, etc.

Tom
From: Tamas Demjen on
Tamas Demjen wrote:
> you have to implement fault tolerant programming techniques

And if your data is important to you, use time machine/auto save/
journaling/audit trailing/incremental backup (just different
implementations of the same idea). That way you can limit the
extent of a catastrophic program failure, like your deadlock.

Tom