From: Seungbeom Kim on
On 2010-07-04 13:48, Andre Kaufmann wrote:
> We had a lot of trouble adding Unicode support to our C++ applications.
> To be binary compatible we used UTF8 for transportation and internally
> used UTF16 where possible. But with the multiple conversions involved
> (text / xml / binary etc.) it happened more than once that we missed
> converting the characters appropriately at some location. Such bugs
> were quite hard to find and we are not sure we have found them all yet.

If the fact that UTF-8 and UTF-16 strings share the same type lets you
forget a conversion without getting any compilation error, then using
std::string for UTF-8 and std::u16string for UTF-16 in C++0x may help
you catch such mistakes at compile time.
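For illustration, a minimal sketch (mine, not from the original post) of
how distinct types turn a forgotten conversion into a compile-time error;
to_utf16 and send_utf16 are hypothetical helpers:

#include <string>

std::u16string to_utf16(const std::string& utf8);  // hypothetical conversion helper
void send_utf16(const std::u16string& s);          // interface that expects UTF-16

void example(const std::string& utf8_text)
{
    // send_utf16(utf8_text);         // error: no implicit conversion from std::string
    send_utf16(to_utf16(utf8_text));  // the conversion can no longer be forgotten
}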

--
Seungbeom Kim

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Mathias Gaunard on
On Jul 5, 6:07 pm, Walter Bright <newshou...(a)digitalmars.com> wrote:

> Now another fellow on your team adds a global state dependency somewhere
> deep in
> that call hierarchy. How are you to know? Suppose someone else derives and
> overrides my_pure_function with an impure one? How are you to stop that?
>
> What if, between calls, someone modifies something s is transitively
> pointing to?

If you can't get the project team to follow the coding standards
for a given project, and to read the comments that say "this
function is pure", then I think you already have some big problems.

Saying a function is pure is actually quite similar to what we do in
contract programming. People seem to do fine with contracts that are
simply documented, without any checking done by the compiler, even
though such checking would certainly help debugging.
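For instance (my own trivial illustration), a contract that is merely
documented, with an optional run-time check instead of compiler
enforcement:

#include <cassert>

// Pure function. Precondition: y != 0.
// The contract lives in this comment; the compiler knows nothing about it.
int safe_div(int x, int y)
{
    assert(y != 0);  // optional run-time check, disappears with NDEBUG
    return x / y;
}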


> Because of this, I suspect that trying to use purity and immutability
> purely by
> convention is doomed to failure.
>
> (And also, since the compiler cannot know about its purity and
> immutability, it
> also cannot take any advantage of that information to produce better
> code. The
> compiler cannot even cache pointer to const values.)

Some compilers (I don't know about Digital Mars, but I hope it is one
of them!) allow you to tag a function as pure using attributes. If they
can deduce through static analysis that the function isn't really pure,
they could also emit a warning.
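For example, a sketch using GCC-style attributes (other compilers offer
similar markers under different spellings):

// Result depends only on the arguments; no memory reads or side effects.
int square(int x) __attribute__((const));

// May read global or pointed-to memory, but has no side effects.
int lookup(const int* table, int i) __attribute__((pure));

int twice(int x)
{
    return square(x) + square(x);  // the optimizer may evaluate square(x) only once
}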



--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Mathias Gaunard on
On Jul 4, 6:41 pm, Andre Kaufmann <akfmn...(a)t-online.de> wrote:

> In C++0x this could be simulated by using lambda functions:
>
> auto Div = [] (int x , int y) { return x / y; };
> auto Div5 = [&Div](int x) { return Div(x, 5); };
> int res = Div5(10);
> printf("%d", res);

Why aren't you using [&] and auto for res? That would be closer to the
ML code.
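Something like this, presumably (my sketch of the variant I mean):

#include <cstdio>

int main()
{
    auto Div  = [] (int x, int y) { return x / y; };
    auto Div5 = [&] (int x) { return Div(x, 5); };  // capture by reference, as in the ML code
    auto res  = Div5(10);                           // let the compiler deduce the type of res
    std::printf("%d\n", res);
}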

>
> More equivalent would be this invalid C++0x code:
>
> auto Div = [] (auto x , auto y) { return x / y; };
> auto Div5 = [&Div](int x) { return Div(x, 5); };

Some people are working on providing this as an extension to C++0x
lambdas in GCC.

I don't really know why they chose to make lambdas monomorphic. This
is disappointing. Thankfully DSELs don't have that limitation.
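In the meantime the usual workaround (my sketch, not from the post) is a
hand-written function object with a templated call operator, which is
essentially what a polymorphic lambda would generate:

struct Divider
{
    template <typename X, typename Y>
    auto operator()(X x, Y y) const -> decltype(x / y) { return x / y; }
};

int main()
{
    Divider Div;                                    // usable with any types supporting /
    auto Div5 = [&] (int x) { return Div(x, 5); };
    return Div5(10) - 2;                            // evaluates to 0
}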



> b) If I understood it correctly, function and type expansion is deferred
> You deal with functions like with any other types and
> variables. You can modify them, pass them to other functions
> and so on.
> Besides flexibility, it also enables the compiler to efficiently
> generate code through deferred function expansion.
>
> It's not that it can't be done in C++. You would have to use
> templates, lambda functions, and template metaprogramming in C++.
> But at the cost of readability and flexibility.

Huh? It's exactly the same. You can copy and modify function objects
just fine in C++.
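A quick sketch of what I mean (mine, not from the post): function
objects are ordinary values that can be copied, stored and passed around
like anything else.

#include <cstdio>
#include <functional>
#include <vector>

int main()
{
    std::function<int(int)> inc = [](int x) { return x + 1; };
    std::function<int(int)> copy = inc;            // copy the function object
    std::vector<std::function<int(int)>> ops;      // store it in a container
    ops.push_back(copy);
    std::printf("%d\n", ops[0](41));               // pass it around and call it: prints 42
}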


> If there were meta information available about classes, functions and
> parameters, a wrapper generator could emit code which would enable
> other C++ compilers, or even other languages, to call the functions
> or even use the classes in a (compiled) library directly.
> A kind of C++ ABI.

That can already be done just fine.
The only problem is that one of the most important compilers, MSVC,
doesn't follow the de facto standard C++ ABI.


--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: nmm1 on
In article <i0qjpf$nk6$1(a)news.eternal-september.org>,
Walter Bright <walter(a)digitalmars-nospamm.com> wrote:
>
>I object to your analogy with random number generators. While fp
>result can appear to be random, they are not. For the careful fp
>programmer, he can get reliable, predictable, correct results
>provided that the underlying implementation takes care as well.

Well, I didn't say what you thought that I did, but no matter.
Let's address what you seem to have understood me to say, as it's
perfectly relevant.

I am afraid not. That is true for only some architectures and
implementations, and is one of the great fallacies of the whole
IEEE 754 approach. Even a 'perfect' IEEE 754 implementation is
not required to be predictable.

Once you get away from the modern mainstream, it is fairly common
for a compiler to choose algorithms or hardware[*] based on dynamic
considerations. You then get unpredictable results. Sorry, but
that really does happen; it's allowed by almost all languages
(including C++), and it isn't going to change.

The point is that it is usually the software equivalent of branch
prediction, which leads to unpredictable times, but usually gives
enough of a performance gain that every architecture does it. It
is less common in compilers, but can give considerable speedups,
and I have seen it even at the operating system and hardware
levels.

[*] Consider using a GPU only if another core isn't using it, or
switching between equivalent algorithms based on dynamic heuristics.
Parallelism brings this in, redoubled in spades, as reductions are
always unpredictable unless ridiculous contortions are taken to
avoid that.
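A tiny illustration of the reduction point (my example, not part of
Nick's post): floating-point addition is not associative, so the same
values summed in a different order, as a parallel reduction is free to
do, can give a different result.

#include <cstdio>

int main()
{
    double a = 1e16, b = -1e16, c = 1.0;
    double left  = (a + b) + c;  // == 1.0
    double right = a + (b + c);  // == 0.0: c is absorbed by b before b cancels a
    std::printf("%g vs %g\n", left, right);
}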


Regards,
Nick Maclaren.

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Andre Kaufmann on
Joshua Maurice wrote:
> On Jul 4, 10:41 am, Andre Kaufmann <akfmn...(a)t-online.de> wrote:
>> On 04.07.2010 00:50, Dragan Milenkovic wrote:
>>> However, what would be really nice is something to make
>>> the pimpl idiom obsolete. :-D
>> Yep, a module concept is IMHO badly needed in C++. Unfortunately it has
>> been removed from the first C++0x proposals.
>
> Can you point at any proposals or ideas? I'm not sure how "modules"
> would remove the need for pimpl. pimpl is an idiom whose goal is to

Why is pimpl needed in C++?
As you already correctly stated: to decouple source code and to increase
compilation speed.

But why can't that be done by the compiler automatically?
Why do we have to use such a stone-age relic of code decoupling as
header files and pimpl in C++?
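For readers following along, this is the kind of boilerplate being
discussed (a minimal pimpl sketch of my own, not from the thread):

// widget.h -- clients see only an opaque pointer, so implementation
// changes don't force them to recompile.
#include <memory>

class Widget
{
public:
    Widget();
    ~Widget();               // defined where Impl is complete
    void foo();
private:
    struct Impl;             // forward declaration only
    std::unique_ptr<Impl> impl;
};

// widget.cpp
#include "widget.h"

struct Widget::Impl { int state; };
Widget::Widget() : impl(new Impl()) {}
Widget::~Widget() {}
void Widget::foo() { ++impl->state; }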

Before I try to explain that, let's have a look at how compilation would
work using C++ modules:

Let's assume we have the following class in a single module file:

class Test
{
public: void foo() {}
};

The C++ compiler has all the information required to generate
precompiled code and a precompiled header file when the module is
imported for the first time, shortly after compilation has started.

E.g. in another module:
#import "Test"

If the same module is used anywhere else in the same project, the
compiler can (or rather, could) check whether it has already compiled
the code and just use the precompiled header file (or the code, for
inlining).

No need to reparse the whole module again - why should the compiler do
that anyway? It has already compiled the code!

Now back to C++:

Why can't a C++0x compiler just do the same?
The simple answer: because of the dumb preprocessor.
Every translation unit can define different macros, so the preprocessor
can emit different code from the same header.
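A contrived example of my own to illustrate the point:

// config.h
#ifdef USE_DOUBLE
typedef double real;
#else
typedef float real;
#endif

// a.cpp
#define USE_DOUBLE
#include "config.h"   // here real is double

// b.cpp
#include "config.h"   // here real is float: same header, different code

So the compiler cannot simply reuse one compiled form of config.h; it
has to preprocess and parse it again for every translation unit.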

The result is:

a) The compiler has to compile the same code over and over again.
b) The C++ developer, to prevent too much code from being recompiled,
   has to decouple the code manually.
   But as soon as templates are involved you are lost in C++
   and can't decouple appropriately anymore.

What could be done about that in C++? Quite simple: restrict macro
expansion to the single header file that defines the macro (or apply it
to the whole project), but don't let a macro definition propagate into
every other header that happens to include the one defining it.

That would break backwards compatibility, so modules, which can do
exactly that, would have to be introduced.
Simple but effective:
Simple but effective:

- No precompiled header files anymore
- No pimpl needed anymore
- Compilation speed increase by factor 100 and above in big projects

Most languages (besides C) just do that.
E.g. the D language is a proof of concept that this can be done with a
C-style language.

And with a two-pass compiler (like C#'s) you don't even have to include
anything. The compiler just does what it should do - compile the code.

(Disclaimer: I once thought header files and the preprocessor were a good
design decision too - now I think just the opposite.)

I don't know why B.S. chose to let macros propagate across header
file boundaries. Perhaps this was just done to stay compatible with the
C preprocessor, or he didn't think about, or wasn't aware of, the
implications when he made that decision.

I think the latter:
If you read his FAQ entry on compilation issues, his answer to
"why do my compiles take so long" is that the program is "more likely"
poorly designed. I think it's just the opposite: the C preprocessor /
header file system is poorly designed, and C++ unfortunately adopted it.


I know I don't make many friends here with such statements. But if I
didn't care about C++ I would just stay away and wouldn't write long
posts. If I were just trolling, I wouldn't write such long posts either.

My intention is simple:

Increase the understanding of why C++ needs modules.

> [...]

Andre

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]