From: Bo Persson on
Walter Bright wrote:
> Bo Persson wrote:
>> Requiring compiler support doesn't make a feature clean, it makes
>> it dirty.
>
> I tend to feel that what makes a language dirty is when it's loaded
> with special cases and arcane rules. A clean language is not
> necessarily one with a small number of features, it is one with
> features that are orthogonal yet fit together in a predictable and
> sensible manner.
>> I believe that's where James and I disagree with Walter.
>> If you have a core language support that allows std::string to be
>> efficiently implemented as a library, that means that other
>> similar classes can be efficiently implemented as well. By someone
>> other than the compiler writer.
>>
>> The fact that it can be done as a library only, with no specific
>> magic, actually shows that the language is stronger, not weaker.
>
> D can, as well as C++ can, create library defined strings, vectors,
> etc. I get the impression you feel that D somehow sacrificed the
> ability to create user defined types by adding core support for
> arrays and strings.

I don't know D well enough to judge that. I just feel that if you say that
it works better with compiler support, that is bad news for library writers.
Perhaps we then cannot write good enough libraries for other types either?


Perhaps my aversion to compiler magic goes all the way back to my first
experience with Pascal. Easy to learn, and everything. Among other features,
it has read and write statements that look just like procedures, where you
can have any number of parameters, in any order, and even add some
formatting hints. Very nice!

WRITE('The result is ', result:3:2, '. Processing time ', time:5, ' seconds.');

Now, how do I write that kind of output procedure for my user-defined
types? Of course I cannot; it is all compiler magic, and it only works for
built-in types! Blah!

So I don't like built-in types. :-)
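
With C++ and operator overloading, on the other hand, a library writer can
give his own types exactly the same treatment the built-in types get. A
small sketch of what I mean (the names are my own, purely for illustration):

#include <iostream>
#include <iomanip>

struct Result { double value; int seconds; };   // a hypothetical user-defined type

// The "output procedure" is written by the library user, not the compiler:
std::ostream& operator<<(std::ostream& os, const Result& r)
{
    return os << "The result is " << std::fixed << std::setprecision(2)
              << r.value << ". Processing time " << r.seconds << " seconds.";
}

int main()
{
    Result r = { 3.14159, 42 };
    std::cout << r << '\n';    // works just like output of the built-in types
}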

>
>
>> The fact that Walter designs and implements his own language puts him
>> in a different position than most of us. Perhaps that influences his
>> point of view? Or maybe his views are the reason for doing it?
>
> Actually, it was using languages that did support strings and arrays
> much better than C++ that led me to want much better than what
> could be done with a library. Maybe it's a coincidence that such
> languages are almost always considered to be easier to learn and use,
> and more productive to program in than C++. Strings and arrays are
> ubiquitously used, so even small improvements in their usability
> can pay big dividends in the long run.
>
>
>> I am very happy when I can implement, or extend, some feature
>> without specific compiler support.
>
> There's no way you're going to get std::string, std::vector or
> std::complex to work as well as core support does, using only existing
> C++ core features.

In that case I would very much prefer that we add the necessary core
features, rather than add the types themselves to the core language. (I have
noticed that you have done some of that in D, but obviously not enough.)

>
> But I do have a couple challenges for you <g>.
>
> 1) Extend std::string to support UTF-8.

Don't know. Probably can't be done.

>
> 2) Extend std::string to support strings with embedded 0's.

It can handle that, as long as you don't try to convert to or from char*.

Haven't found much use for it though.
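
Just to illustrate what I mean (a little example of my own): as long as you
pass explicit lengths, the embedded zero survives; it is only the char*
conversions that lose it.

#include <iostream>
#include <string>

int main()
{
    std::string s("abc\0def", 7);   // explicit length keeps the embedded 0
    std::cout << s.size() << '\n';  // prints 7

    std::string t = "abc\0def";     // the char* constructor stops at the 0
    std::cout << t.size() << '\n';  // prints 3
}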


Bo Persson




From: Walter Bright on
James Kanze wrote:
> Walter Bright wrote:
> [...]
>> The easiest way to deal with roundoff errors is to increase the
>> precision.
>
> It's also the most naive :-). If you're losing one bit of
> precision per iteration, and you're iterating a couple of
> million times, you're going to have to increase the precision a
> lot.

Nevertheless, more precision buys you more computations you can do
before having to deal with roundoff problems. Often, it's enough.

> And please take this in the somewhat humorous vein in which it is meant.
> Increasing precision is a solution sometimes, and I'm pretty
> sure that you do understand the problems, and know enough to
> verify that it is the correct solution in your case. But just
> as obviously, it's not a silver bullet, and it won't really help
> the naive user.

It will help the naive user simply because he will fall into the roundoff
swamp less often. And yes, any serious numerical analyst must take steps
to ensure that roundoff errors do not dominate his results, regardless of
the precision.
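
One classic example of such a step (just a sketch of the idea, not anything
James brought up) is compensated summation, which carries a correction term
so the error no longer grows with the number of additions:

#include <cstdio>

// Kahan (compensated) summation: c accumulates the low-order bits
// that are lost when sum + y is rounded.
// (Assumes the compiler does not reassociate floating point, i.e. no -ffast-math.)
double kahan_sum(const double* a, int n)
{
    double sum = 0.0, c = 0.0;
    for (int i = 0; i < n; ++i)
    {
        double y = a[i] - c;
        double t = sum + y;   // low-order bits of y are lost here...
        c = (t - sum) - y;    // ...and recovered here
        sum = t;
    }
    return sum;
}

int main()
{
    static double a[100000];
    for (int i = 0; i < 100000; ++i)
        a[i] = 0.1;
    // The compensated sum of 100000 copies of 0.1; compare it with a
    // plain running total to see the difference.
    std::printf("%.10f\n", kahan_sum(a, 100000));
}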


>> Any compiler that doesn't at least provide the full precision
>> of the underlying hardware cannot be taken seriously as a tool for doing
>> serious numerical analysis. It tells me the compiler vendor has some
>> agenda other than paying attention to the numerics crowd, and I wouldn't
>> use such a tool for serious numerical work.
>
> I think the problem is somewhat more complex. If you use long
> double on an Intel, you do not have the guard bits and other
> stuff

Are you sure about that?

> that helps ensure correct rounding.

Rounding is a kludge to soften the blow of too few bits. More bits is
better.

> As I understand it,
> the idea behind extended precision was that it should be used
> for intermediate results, and not for storing data---the idea is
> that an expression like a*b/c will always give the "correct"
> results if the correct results fit in a double.

If you store a double to memory, you lose guard bits and sticky bits.
Those bits are not stored in the FPU registers, either. They only exist
when, say, an ADD is being computed and the result is adjusted to fit
into the bits of the destination register. In other words, the
calculation is rounded to fit the 80-bit registers. Then it gets rounded
again when it is converted to a double and written to memory.

It's not better to round twice than to round once. More bits is better
for anything but the final printed result. Rounding should be delayed
as long as possible.

Here's an exercise for you. Do a chain of floating point calculations
with your calculator. Do it once by writing out the intermediate results
rounded to 2 decimal places, then reentering that value for the next
calculation. Do the whole thing again by using the full 10 digits on its
display.

You'll see what I'm talking about.
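
Or, if you'd rather let the machine do the exercise, here's a rough code
version of the same thing (my own quick sketch): one running value is
rounded to two decimals after every step, the other keeps full precision.

#include <cmath>
#include <cstdio>

// Round x to two decimal places, like writing down an intermediate result.
double round2(double x) { return std::floor(x * 100.0 + 0.5) / 100.0; }

int main()
{
    double full    = 10.0 / 3.0;            // keep all the bits
    double rounded = round2(10.0 / 3.0);    // "write down" 3.33 and reenter it

    for (int i = 0; i < 20; ++i)
    {
        full    = full * 1.07 - 0.21;
        rounded = round2(rounded * 1.07 - 0.21);   // round after every step
    }
    // The two values drift apart as the per-step rounding errors accumulate.
    std::printf("full precision:    %.6f\n", full);
    std::printf("rounded each step: %.6f\n", rounded);
}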


>> Unlike C++, a conforming D compiler is *required* to provide a type
>> representing the max precision the underlying hardware is capable of.
> For what definition of "the underlying hardware is capable of"?
> Obviously, this isn't what you mean, but while I think what you mean
> may be obvious on most general purpose machines, I'm not sure that it
> is precise enough for a standard.

I'm writing a newsgroup posting, not a legalese document that someone is
going to try to subvert.


> The C++ standard, of course, follows the lead of C, and allows
> the compiler to use extended precision in intermediate results,
> without requiring it. The results can be surprising, to put it
> mildly.

I know why the standard is written that way, but I simply do not agree
with the philosophy of dumbing the language down to the capability of the
minimum machine it is defined to run on. If I buy a machine with 128-bit
floats, dad gum I want to use those bits. I don't care that the VAX has
less precision; I didn't buy a VAX. Why would I want to use a tool
dumbed down to VAX precision?

At least Java relented (after heavy pushback from users) on requiring all
intermediate results to be rounded to 64 bits. I'm just not sure whether
the pushback was for numerical accuracy reasons, or just because doing
that rounding is slow on the x86.


>> And btw, g++ does offer 80 bit long doubles on the x86. Kudos
>> to the gcc implementors.
> I'm willing to bet that even VC++ uses extended precision on an
> x86, even if it won't allow you to declare a variable of that
> type. Not doing so slows down the math processor considerably.

While better than nothing, I don't consider that support for extended
precision. Furthermore, I was very disappointed that while the x64 adds
all kinds of fancy floating point support, it completely omits extended
precision, both for intermediate values and for types (other than legacy
support for the old instructions). I was hoping to see a 128-bit
floating point type.
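
For what it's worth, here's a quick way to see what a given compiler
actually hands you (just a small illustrative check): g++ on x86 typically
reports 64 significand bits for long double, while compilers that map
long double onto plain double report 53.

#include <iostream>
#include <limits>

int main()
{
    // 53 significand bits means plain 64-bit IEEE doubles;
    // 64 means the x87 80-bit extended format.
    std::cout << "double:      " << std::numeric_limits<double>::digits << " bits\n";
    std::cout << "long double: " << std::numeric_limits<long double>::digits << " bits\n";
}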


From: Gabriel Dos Reis on
Walter Bright <walter(a)digitalmars-nospamm.com> writes:

| Alf P. Steinbach wrote:
| > But even if so, how does an Imaginary type help with that, when x+i*y
| > can always be represented as Complex(x,y) and vice versa?
|
| This explains it better than I can:
| http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf

Proof by authority.

OK, let's walk through this.

C++'s library has operators overloaded for mixed operands, so no
wasteful arithmetic. It is just as in:

[...]

James Gosling has proposed. Kahan's imaginary class allows real and
complex to mix without forcing coercions of real to complex. Thus his
classes avoid a little wasteful arithmetic (with zero imaginary parts)
that compilers can have trouble optimizing away. Other than that, with
overloaded infix arithmetic operators, you can't tell the difference
between Kahan's syntax and Gosling's.
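
For instance (my own little illustration), std::complex already provides the
mixed real/complex overloads, so the scalar is never widened to a complex
with a zero imaginary part:

#include <complex>

int main()
{
    std::complex<double> z(3.0, 4.0);
    double x = 2.0;

    // Calls operator*(const double&, const complex<double>&); x is not
    // first converted to complex<double>(2.0, 0.0), so no multiplications
    // by a zero imaginary part are wasted.
    std::complex<double> w = x * z;
    (void)w;
}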


Furthermore, the C99 standard type _Complex (the equivalent of std::complex)
has its arithmetic specified to take care of the branches and discontinuities.

So, there must be something more to your arguments than you have said. It
would be very useful if you could articulate them for those of us who are
ignorant of numerics, instead of invoking proof by authority.

Thanks,

--
Gabriel Dos Reis
gdr(a)integrable-solutions.net


From: Seungbeom Kim on
Terry G wrote:
> Here's some Undefined Behavior from the C++ Standard that I have (which
> might be old) from 5.8-1.
>
> "The behavior is undefined if the right operand is negative, or greater than
> or equal to the length in bits of the promoted left operand."
>
> Why not just define the behavior?
> If the right operand is negative, then it's the same as a left-shift, by the
> absolute value.
> If it's greater than or equal to the length in bits of the promoted left
> operand, then it's -1 if the number was negative, otherwise 0.
> I don't really care what definition is chosen. Pick something reasonable.
> Then, whatever hardware instructions are available should be used to make it
> so.

Assuming the underlying hardware doesn't define such behaviour, the
compiler would have to generate code like this for (X << Y):

if (Y < 0)
    evaluate X >> -Y instead
else if (Y >= bit_sizeof(X))
    if (X < 0)
        return -1
    else
        return 0
else
    evaluate X << Y

Not all programs need, or can afford, this overhead. When the input is known
to be in a certain range, the program can skip the check; otherwise, the
program can do the check itself. This is the merit of leaving some corner
cases undefined.
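
For example (a sketch of my own, assuming a 32-bit unsigned int), a rotate
routine can guarantee a valid shift count by itself, and the compiler is
still free to emit bare shift instructions:

// Assuming 32-bit unsigned int.
unsigned rotl32(unsigned x, unsigned n)
{
    n &= 31;            // the caller's own range check: count is now 0..31
    if (n == 0)
        return x;       // avoid x >> 32, which would itself be undefined
    return (x << n) | (x >> (32 - n));
}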

--
Seungbeom Kim


From: Simon Farnsworth on
Terry G wrote:

> Here's some Undefined Behavior from the C++ Standard that I have (which
> might be old) from 5.8-1.
>
> "The behavior is undefined if the right operand is negative, or greater
> than or equal to the length in bits of the promoted left operand."
>

I assume you're describing operator>> on integral types here.

> Why not just define the behavior?
> If the right operand is negative, then it's the same as a left-shift, by
> the absolute value.
> If it's greater than or equal to the length in bits of the promoted left
> operand, then it's -1 if the number was negative, otherwise 0.
> I don't really care what definition is chosen. Pick something reasonable.
> Then, whatever hardware instructions are available should be used to make
> it so.
>
The thing is that with UB in the standard, the compiler can implement
operator>> with a single machine instruction. With your definition,
operator>> on Intel x86 becomes a test and branch to code that negates the
right operand and continues in operator<<, then two more tests and branches
for the case where the length in bits isn't long enough, and then (once all
the error cases are past) the same single machine instruction as before.

In this particular case, the well-defined case is also the common case; if
you need the more complex behaviour, it's not hard to write it yourself,
something like this (untested, for illustration only):

// Assuming 8-bit chars
template<typename T, typename S> T shiftright(T x, S y)
{
    if (y < 0)
    {
        return shiftleft(x, -y); // definition of shiftleft is left for the reader
    }
    if (y >= static_cast<S>(8 * sizeof(T)))
    {
        if (x < 0)
        {
            return -1;
        }
        else
        {
            return 0;
        }
    }
    return x >> y;
}

Bear in mind that on x86 (one of the most common architectures on people's
desktops), the compiler has to generate code like this for your proposed
definition of operator>>.
--
Simon Farnsworth
