From: Alf P. Steinbach /Usenet on
* Seungbeom Kim, on 01.07.2010 16:49:
>
[snip]
> The points are all valid. However, it is also "given" that size_t is
> unsigned, and so is size_type for the standard containers. Employing
> signed integers for sizes and counts inevitably leads to mixing them
> with unsigned integers which are given by sizeof() and container.size(),
> and a flood of corresponding warnings:
>
> for (int i = 0; i < v.size(); ++i) ... // warning!
>
> for (unsigned i = 0; i < v.size(); ++i) ... // no warning
> // or better(?), use std::vector<...>::size_type
>

On the contrary, using size_t for sizes and counts generally leads to mixing them
with signed integers (as an obvious example, consider std::count, whose result is
a signed difference_type).

The point of using signed sizes is mostly to avoid that error-prone mixing.

The problems of unsigned arithmetic cannot be completely avoided, but they can be
significantly reduced, in particular for expressions involving subtraction. That
saves the time otherwise spent fixing subtle and not-so-subtle bugs resulting from
e.g. a value-changing promotion of signed to unsigned, and, not least, the time
spent on code special-cased on a large number of "best fit" types -- they solve
nothing but add to size and complexity.


> I'm curious how you deal with this issue if you're diligently following
> your own arguments.

The key idea is to define the support lacking in the standard library, namely

* a signed Size type (as ptrdiff_t), and a ditto Index type,

* size, startOf and endOf functions where size() yields Size, and

* to support those functions, automatic iterator type deduction.
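
For concreteness, a minimal sketch of the size part of that support could look
like this (the names Size, Index and size here are just for illustration, not
the actual library referred to below):

  #include <cstddef>                      // std::ptrdiff_t, std::size_t

  typedef std::ptrdiff_t Size;            // signed size/count type
  typedef std::ptrdiff_t Index;           // signed index type

  // Signed size of a standard container.
  template< class Container >
  Size size( Container const& c )
  {
      return static_cast<Size>( c.size() );
  }

  // Signed size of a raw array.
  template< class T, std::size_t n >
  Size size( T (&)[n] )
  {
      return static_cast<Size>( n );
  }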

Then your example becomes

for( Index i = 0; i < size( v ); ++i )

which works no matter if v is a std::vector or a raw array or whatever, and
which also works just as nicely for a countdown,

for( Index i = size( v ) - 1; i >= 0; --i )

For details & discussion see <url:
http://alfps.wordpress.com/2010/05/10/how-to-avoid-disastrous-integer-wrap-around/>.


Cheers & hth.,

- Alf

--
blog at <url: http://alfps.wordpress.com>

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Kaba on
Seungbeom Kim wrote:
> > One can imagine an unsigned floating point type by stripping off the
> > sign bit, and perhaps giving an additional bit to the mantissa. Such a
> > type would have similar problems as listed here for the integers.
> > Interestingly, people don't seem to have anxiety over this lost bit,
> > although the situation is exactly as with the integers.
>
> Probably because floating-point types already have wider ranges,
> and probably because people already take it as a matter of fact
> that floating-point values are not accurate.
>
> An IEEE single precision floating-point format can already represent
> 24 bits of mantissa, i.e. about 7 decimal digits, which is enough for
> many purposes, and adding one bit doesn't change things that much:
> it doesn't make it capable of representing 1/3 or 0.1 exactly anyway.
> And that's "single" precision (usually the 4-byte type "float");
> an IEEE double precision floating-point format (usually the 8-byte type
> "double") has 53 bits of mantissa, i.e. about 16 decimal digits.
> And the inexactness stays the same.
>
> Things might have been different for integers in the old 8-bit and
> 16-bit days, where the space was limited, out of which every possible
> bit had to be utilized. 32767 and 65535 make a big difference as
> the highest representable integer. And no inexactness is tolerable.
> That's probably why size_t had to be unsigned. In the 64-bit days,
> who cares if size_t cannot represent more than 2^63-1 instead of 2^64-1?

Agreed on everything, especially the last paragraph. It's only now, with
64-bit native integers, that it is practically possible to even start
discussing what the signedness of size_t should be (since the range is no
longer an issue).

> > 5. Summary
> >
> > I think all of the previous can be summarized as follows. Most of the
> > time we are interested in modeling the ring of integers (math Z), even
> > though we'd only use the non-negative values. This is because values are
> > rarely simply stored: rather, they are used for further computations. We
> > want to (or most often implicitly) think of all operations (+, -, *) as
> > if we were working in Z. Because of finite storage, this does not hold
> > on the limits. But it does hold most of the time, that is, when we work
> > on the neighborhood of zero. Being able to work in this neighborhood
> > safely and cleanly is important for bug-free programs. This is what
> > signed integers offer over unsigned integers.
>
> The points are all valid. However, it is also "given" that size_t is
> unsigned, and so is size_type for the standard containers. Employing
> signed integers for sizes and counts inevitably leads to mixing them
> with unsigned integers which are given by sizeof() and container.size(),
> and a flood of corresponding warnings:
>
> for (int i = 0; i < v.size(); ++i) ... // warning!
>
> for (unsigned i = 0; i < v.size(); ++i) ... // no warning
> // or better(?), use std::vector<...>::size_type
>
> I'm curious how you deal with this issue if you're diligently following
> your own arguments.

Yes, this is a problem. Ideally, as you might guess, my answer would be
to make size_t signed: that would follow my general reasoning. Of
course, that would break existing code, so it probably isn't on the
horizon.

Practically, I have to assume that the values used fall within the range
covered by the non-negative values of the signed integer (to allow for
"lossless" conversion), and ignore the signed/unsigned warnings.
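
As a small sketch, that assumption could be made explicit with a checked
conversion helper (toSigned is just an illustrative name):

  #include <cassert>
  #include <cstddef>
  #include <limits>

  // Convert a size_t value to ptrdiff_t, asserting that it fits.
  std::ptrdiff_t toSigned( std::size_t s )
  {
      assert( s <= static_cast<std::size_t>(
                       std::numeric_limits<std::ptrdiff_t>::max() ) );
      return static_cast<std::ptrdiff_t>( s );
  }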

If I take two offsets

std::size_t aOffset = ...;
std::size_t bOffset = ...;

and want to compute their difference, then in general I need a signed integer
value to hold the result (for example 3 - 4 = -1). Thus: it is possible
to use signed integers solely, but it is not possible to use unsigned
integers solely. Of course, it is possible for the result of an
operation between signed integers to be out of range: this is a
consequence of finite storage. The reason this is acceptable is that
it only happens far away from zero: locally, near zero, it seems to us
that the world consists of all the integers.
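
For example, as a minimal sketch (assuming, as above, that both offsets fit in
the non-negative range of ptrdiff_t; offsetDifference is just an illustrative
name):

  #include <cstddef>

  // Signed difference of two unsigned offsets; may be negative.
  std::ptrdiff_t offsetDifference( std::size_t aOffset, std::size_t bOffset )
  {
      return static_cast<std::ptrdiff_t>( aOffset )
           - static_cast<std::ptrdiff_t>( bOffset );
  }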

--
http://kaba.hilvi.org

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: nmm1 on
In article <slrni2pcam.qsi.mordor(a)fly.srk.fer.hr>,
Zeljko Vrba <mordor.nospam(a)fly.srk.fer.hr> wrote:
> On 2010-07-01, Francis Glassborow <francis.glassborow(a)btinternet.com> wrote:
>>
>> The main reason for using default arguments was that it reduced
>> problems with multiple constructors where you could not use a wrapper to
>> forward to the general version. That has been fixed in C++0x and so the
>> largest motive for using default arguments has gone.
>>
> For me, the largest motive for using default arguments is to extend
> functionality of methods without breaking existing code, where
> overloading is not practical.

In my view, the main reason to use default arguments is when writing
code where they are a natural requirement of the algorithm. That is
not rare.

Yes, you can use other methods, such as overloading, to emulate the
facility, but it's a first-class way to introduce the most obscure
"gotchas". I have seen some truly evil ones :-(

On the other hand, using default arguments to emulate overloading
has exactly the same problems - though I take your point about the
use for transparent extensions. But that's a necessary hack, not
something desirable in itself.

My viewpoint is language-independent, incidentally.


Regards,
Nick Maclaren.

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Walter Bright on
Edward Diener wrote:
> I have never understood such reasoning. If I have an integer value which
> I know must be unsigned, why would I not use an unsigned integer rather
> than a signed integer ?


Because unsigned types tend to be 'sticky', i.e. (1 + 1u) is unsigned. This
propagation of unsigned-ness is subtle and surprises even experienced
programmers.
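
A small, hypothetical illustration of that propagation (not an example from
the thread):

  #include <iostream>

  int main()
  {
      unsigned u = 1;
      int i = -2;
      // i is converted to unsigned, so i + u wraps to a huge positive value
      // and the comparison is done in unsigned arithmetic:
      std::cout << ( i + u > 0 ) << '\n';   // prints 1, not 0
  }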

There was a long thread on this over on the D n.g., and I must say that although I started from your position, I'm slowly moving towards using unsigned only if you expect the values to have the high bit set.

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Francis Glassborow on
Zeljko Vrba wrote:
> On 2010-07-01, Francis Glassborow <francis.glassborow(a)btinternet.com> wrote:
>> The main reason for using default arguments was that it reduced problems
>> with multiple constructors where you could not use a wrapper to forward to
>> the general version. That has been fixed in C++0x and so the largest motive
>> for using default arguments has gone.
>>
> For me, the largest motive for using default arguments is to extend
> functionality of methods without breaking existing code, where
> overloading is not practical.

Please give me an example because I can make no sense of your claim.

void foo(int, float, int, double);
inline void foo(int i, int j, double d){
    return foo(i, 0.0f, j, d);
}

How can that break anything? And it works just as well for member functions.

Note that there is only one effective function body and only one place to make changes.

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]