From: Al on
Hi,

Niklas Matthies wrote:
<snip>
> It's not as private as one might assume; with default security
> settings you can access it via reflection. For example it's possible
> to corrupt a String object by replacing its char[] value.

Sure, you can use reflection to do interesting things. But that's a
whole other can of worms. It isn't just restricted to private data. If
Java's reflection is anything like C#'s, then it can be used to bypass a
whole lot of things that the "static" compiler wouldn't have allowed.
This is fine. No /basic/ language invariants have been violated.

In addition, I believe most of these things _are_ covered under the
security policy, so you could simply restrict code access if you
want to avoid them.

One other thing, when you say it's possible to "corrupt" a String
object, what does that mean, exactly? Do you mean that it is somehow
possible to corrupt the virtual machine's memory integrity? I highly
doubt that.

Thanks,
-Al.



From: PeteK on
James Kanze wrote:
> PeteK wrote:
> > You can easily get rid of dangling pointers in C++ and turn them into
> > zombies instead by simply using a bolt-on garbage collector. The
> > language doesn't stop you doing that.
>
> I'm not sure I understand your point. For the purposes of this
> discussion, there are two fundamental types of data, those with
> a determinate lifetime, and those with an indeterminate lifetime
> (from the design point of view). For dynamically allocated
> objects with an indeterminate lifetime, current C++ requires you
> to explicitly use a delete expression, and make the lifetime
> determinate (and risk dangling pointers). Java just does the
> right thing. For dynamically allocated objects with a
> determinate lifetime, C++ has a standard "name" for the function
> terminating the object's lifetime, the destructor. It's a little
> weird, in that it doesn't have the normal function call syntax,
> but big deal. Java lacks anything standard, but the convention
> seems to be established to use the name "dispose()" (although
> some of the standard classes use this name for other things).
> In the end, it comes out to the same thing. Or almost---if you
> want to, you can set state in the dispose() function in Java to
> ensure that later use is detected. Immediately. To get this in
> C++, you need something like Purify, and the runtime overhead is
> high enough that you can't use it in production code. So Java
> offers a safer solution.
>
But in C++ you can
a. Use a garbage collector
b. Add a dispose function that sets the state, then calls delete (so
you can run it non-GC)

Now you've got exactly the same situation as in Java. C++ doesn't
prevent you from doing this.
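
Something along these lines, perhaps (a minimal sketch; the Connection
class and its members are made up purely for illustration):

#include <cassert>

// Sketch of option (b): a hand-written dispose() that records the end of
// the object's logical life so that later use is detected immediately.
class Connection
{
public:
    Connection() : disposed_(false) {}

    void send()                       // every member checks the flag first
    {
        assert(!disposed_ && "use after dispose()");
        // ... real work ...
    }

    void dispose()
    {
        assert(!disposed_ && "dispose() called twice");
        disposed_ = true;
        // In a non-GC build this is also where resources are released and,
        // if the object owns itself, where the delete would be arranged.
    }

private:
    bool disposed_;
};

With a bolt-on collector (option a) the storage stays reachable for as
long as anyone still holds a pointer to it, so the flag check remains
valid; without one, the check only helps until the memory is actually
reused.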

> > However in Java you are stuck
> > with the GC system and there's no way to automate the detection of
> > zombies (big assumption here by someone who's never used it).
>
> You can detect them just as easily as in C++. The big
> difference is that you don't need external instrumentation that
> makes the detection too slow to be used in production code.

You missed the word "automate". You need to add dispose functions (and
the associated checks) by hand. Using Purify is somewhat simpler.

>
> > In principle it should be possible to pick up all potential
> > zombies/dangling pointers in C++ by using a sufficiently clever
> > debugging allocator.
>
> I think some systems do this. The trick is to not make the
> memory available for re-allocation as long as there is a pointer
> to it still in existence, mark it as freed somehow, and then
> instrument every single pointer dereference to check for the
> mark. (It still misses dangling pointers to on stack objects,
> of course.) The problem is that it has unacceptable runtime
> cost; ...

Unacceptable for production use, probably; unacceptable for debugging
and the like, unlikely. Purify really slows things down, but on the odd
occasion you need it, it's well worth it.

Also, given that people don't seem to think that GC slows things down
too much, I can't (off the top of my head) see why freed memory blocks
can't simply be added to a list and then put back into play in a big
chunk. At this stage a GC-like scan could be run to see if any pointers
to the deleted memory remain (with optional invalidation of the pointers
to cause a protection exception on dereference).
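
Roughly the sort of thing I have in mind (a sketch only; the names are
invented and the pointer scan itself is left as a stub):

#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <vector>

namespace debug_alloc
{
    struct Block { void* p; std::size_t n; };

    std::vector<Block> quarantine;                 // freed, not yet reusable
    std::size_t quarantined_bytes = 0;
    const std::size_t flush_threshold = 1 << 20;   // 1 MiB, arbitrary

    void scan_for_dangling_pointers()
    {
        // Stub: a conservative, GC-like scan of the stack, globals and live
        // heap blocks would look here for addresses inside the quarantine.
    }

    void* allocate(std::size_t n) { return std::malloc(n); }

    void release(void* p, std::size_t n)
    {
        std::memset(p, 0xDD, n);                   // poison the freed block
        Block b = { p, n };
        quarantine.push_back(b);
        quarantined_bytes += n;
        if (quarantined_bytes >= flush_threshold)  // put back in a big chunk
        {
            scan_for_dangling_pointers();
            for (std::size_t i = 0; i < quarantine.size(); ++i)
                std::free(quarantine[i].p);
            quarantine.clear();
            quarantined_bytes = 0;
        }
    }
}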

> ...the standard C++ model requires that all objects have
> explicit lifetimes, even when the design doesn't require it, so
> you have to check every single pointer dereference, and not just
> those where the object by design has a determinate lifetime.
> And if you think of things like the implementation of a string
> class, you'll realize that there are a lot of objects which,
> like the char array in a string, don't need explicit lifetime.

The thing is that all objects have a logical lifetime. Accessing them
after their logical life is over is an error. Java chooses to define
this as "not an error" (although this is probably more to do with Java
having GC). While it might appear that a string's char array doesn't
require a specific lifetime, if you've somehow acquired a pointer into
it then the array is kept alive long after the string is dead. If you
use this pointer to access the char array then this is logically wrong,
but you have absolutely no way of detecting it.
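
For example (a contrived sketch of the situation):

#include <iostream>
#include <string>

int main()
{
    std::string s("hello");
    const char* p = s.c_str();        // points into s's internal char array

    s = "a considerably longer replacement value";   // old array may be dropped

    // In standard C++ this is undefined behaviour: p may dangle.  Under a
    // conservative collector the old array is simply kept alive, so the
    // read "works" but silently yields stale characters.  Nothing flags
    // the access as logically wrong.
    std::cout << p << '\n';
}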

>
> The essential thing in being able to detect the problem, of
> course, is not allowing memory to be reused as long as there is
> still an existing pointer to it. Garbage collection, in sum.
> (The Boehm collector is often used in this way, as a leak
> detector, and, with additional instrumentation in user code, to
> detect dangling pointers.)

No, there are things you can't detect. If you have an array of ints
that can legitimately take all integer values, how can you tell that
you're pointing at an array that will no longer be updated?

>
> > Admittedly this doesn't stop you assigning duff
> > values to pointers, but that's the price you have to pay for using a
> > system-level language.
>
> There are several issues at stake. The fact that you can have
> an uninitialized pointer, with undefined contents, can hardly be
> considered a feature.

I didn't say it was uninitialised, and even a default initialisation to
NULL might be no help. After all, address zero was a perfectly valid
address under DOS*. A system level language must allow the programmer
to give a pointer any value that could possibly be valid on the target
system. The problem is that very few programs require that level of
freedom, but it's a price we have to pay.

{*Actually I think making NULL equivalent to zero was a mistake. It
should be a system-defined value.}



From: Ian McCulloch on
Yechezkel Mett wrote:

> Please note that my mathematical understanding of this topic is limited
> - however:
>
> Thomas Richter wrote:
>> Allow me to jump in and throw some light on this from a mathematical
>> p.o.v.
>>
>> The complex square root is not "well defined" on the entire complex
>> plane, as there are always two possible values it can take. To make
>> it a function, you either need to carefully redefine its domain (which
>> is then a two-sheeted Riemann surface), or "cut it" somewhere. The
>> latter approach is taken in numerical applications, and the "cut" is
>> made at the negative real axis by convention.
>>
>> Which means that, if you cross that cut, the square root is discontinuous
>> (at least the so-defined function, the mathematically "proper"
>> definition requires more care but is then analytic). Now, it does make a
>> huge difference in the magnitude whether the imaginary component of the
>> argument is positive or negative, and a proper implementation of a
>> square root function must take care of the sign of the IEEE zero, e.g.
>>
>> sqrt(+0-1) = i
>> sqrt(-0-1) = -i
>
> Should that be
> sqrt(+0i-1) = i
> sqrt(-0i-1) = -i
> ?
>
>
> It seems that what is done here is to put the negative real axis on both
> sides of the cut. Would it not make more sense to (arbitrarily) put it
> on one side or the other? Otherwise we are relying on an artefact of
> representation (the sign of an IEEE zero) which perhaps indicates where
> the zero came from, but really has no meaning to the current value
> (after all, -0 = 0).

No, the whole point of the signed zero in IEEE is that it tells us what the
sign of the value was before it underflowed to zero. Actually it is a
shame that IEEE doesn't differentiate between a +0 that underflowed from
the positive axis and a true (signless) zero. The point is that if you are
using some iterative algorithm (say, calculating sqrt(-1+x*i) for x =
1,1/2,1/4,...) then the IEEE scheme 'just works' without giving a
discontinuity in the output when x becomes zero.
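
A small illustration (assuming an implementation whose std::sqrt for
std::complex follows the C99/IEEE conventions):

#include <complex>
#include <iostream>

int main()
{
    // The sign of the imaginary zero selects the side of the branch cut on
    // the negative real axis, so the sequence stays continuous as x -> +0.
    const double xs[] = { 1.0, 0.5, 0.25, 1e-8, 0.0, -0.0 };
    for (int i = 0; i < 6; ++i)
    {
        std::complex<double> z(-1.0, xs[i]);
        std::cout << "sqrt(-1 + " << xs[i] << "i) = " << std::sqrt(z) << '\n';
    }
    // Expected: values approaching (0,1); only the -0 case lands at (0,-1).
}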

>
> With rounding, a common convention is to round .5 up, although
> mathematically there is no reason to do so - it sits directly on the
> cut. With sqrt you say the location of the cut is purely convention, so
> why insist that the negative real axis must straddle it, rather than sit
> on one side or the other? (Of course, convention is convention, but it
> seems odd.)

Because it is useful that way. If you have a signed zero, then it is
logical that the positive and negative zero straddle the branch cut because
then you can often ignore the branch cut completely and still get the right
result. If it were any other way, you would always have to test whether
you had hit the branch cut, complicating the algorithm.

Cheers,
Ian



From: peter koch larsen on

Gerhard Menzl skrev:
> peter koch larsen wrote:
>
> > Gerhard Menzl skrev:
> >> peter koch larsen wrote:
> >>
> >> > Wrong again. C++ throw() guarantees that the function will not
> >> > throw anything. An empty Java throw specification, on the contrary,
> >> > guarantees nothing of that kind.
> >>
> >> Surely you mean to say that C++ throw() guarantees that
> >> std::unexpected() will be called in case that the function throws in
> >> spite of having promised not to.
> >
> > No. I meant what I wrote. That std::unexpected gets called is a detail
> > that does not invalidate my point, so I'm unsure what point you're
> > trying to make. Are you suggesting that std::unexpected might itself
> > throw an exception that will get by the throw() specification? If that
> > is the case, I believe you're wrong. That behaviour would rather form
> > an infinite loop.
>
> You stated that an empty exception specification guarantees the function
> will not throw anything.

Yup. In context I believed it to be quite evident that what I meant was
that no exception would escape the function. In retrospect this was
unclear.

> But what it actually guarantees is that no
> exception exits the function. Your choice of terms was at least
> misleading: someone unfamiliar with the C++ exception mechanism could
> easily interpret it as describing a compile-time check, which is
> precisely what C++ does not offer.
> To avoid this confusion, especially
> when comparing C++ with Java, which does have static checks, I think it
> is important to distinguish between "cannot throw" and "will abort if it
> throws".

Actually - to increase confusion - it should be clear that the check in
Java applies only to exceptions that derive from the class Exception.
Exceptions deriving from Throwable and RuntimeError (if I remember
correctly) are not checked at all.
Also, I am unsure of exactly what checking the language guarantees.
Example (C++ syntax used to avoid an excessive number of classes):

bool is_small_prime(int i) { return i == 2 or i == 3 or i == 5; }
void f(int i) { if (is_small_prime(i)) throw Exception("Ups"); }

void never_throws(int i) throw()
{
    if (!is_small_prime(i))
        f(i);
}

Will the compiler complain about the signature of never_throws? My
guess is that it will, and thus you must write a swallowing catch,
which in my opinion is bad for reliability.

/Peter


From: Gerhard Menzl on
peter koch larsen wrote:

> Actually - to increase confusion - it should be clear that the check
> in Java is only of exceptions that derive from the class Exception.
> Exceptions deriving from Throwable and RuntimeError (if I remember
> correctly) are not checked at all.

According to

http://java.sun.com/docs/books/jls/second_edition/html/exceptions.doc.html

exceptions derived from Error and RuntimeException are not checked. The
reasons stated are very revealing:

"Those unchecked exception classes which are the error classes (Error
and its subclasses) are exempted from compile-time checking because they
can occur at many points in the program and recovery from them is
difficult or impossible. A program declaring such exceptions would be
cluttered, pointlessly."

and, regarding RuntimeException:

"The information available to a compiler, and the level of analysis the
compiler performs, are usually not sufficient to establish that such
run-time exceptions cannot occur, even though this may be obvious to the
programmer. Requiring such exception classes to be declared would simply
be an irritation to programmers."

If you ask me, this amounts to admitting that static checking of
exceptions is useless at best, which is something that the C++ standard
designers seem to have grasped from the beginning. If I remember
correctly, James Kanze used to point out that Java programmers tend to
derive their user-defined exceptions from RuntimeException because code
using checked exceptions quickly becomes unmanageable.

This raises the question of what checked exceptions are good for at all.
My guess is that it's the situations one would normally handle using
return codes in C++.

> Also, I am unsure of the quality guaranteed by the language. Example:
> (C++ used as example to avoid excessive number of classes):
>
> bool is_small_prime(int i) { return i == 2 or i == 3 or i == 5; }
> void f(int i) { if (is_small_prime(i)) throw Exception("Ups"); }
>
> void never_throws(int i) throw()
> {
>     if (!is_small_prime(i))
>         f(i);
> }
>
> Will the compiler complain about the signature of never_throws? My
> guess is that it will, and thus you must write a swallowing catch,
> which in my opinion is bad for reliability.

My own Java experience is marginal, but the way I interpret 11.2 in the
document cited above is that the compiler would complain about the
signature of f() because it lacks an exception specification. Once this
is corrected, never_throws() would indeed have to wrap the call to f()
in a try/catch block. I think we agree that this is broken.


--
Gerhard Menzl

Non-spammers may respond to my email address, which is composed of my
full name, separated by a dot, followed by at, followed by "fwz",
followed by a dot, followed by "aero".


