From: Joshua Maurice on
On Jul 6, 7:42 am, ThosRTanner <ttann...(a)bloomberg.net> wrote:
> On Jul 6, 2:50 am, Andre Kaufmann <akfmn...(a)t-online.de> wrote:
>
> <snip>
>
> > And with a two pass compiler (like C#) you don't even have to include
> > anything. The compiler just does what it should do - compile the code.
>
> > (Disclaimer: I once thought header files and preprocessor to be a good
> > design decision too - now I think just the opposite)
>
> I think this would be a bad thing. With a 2 pass compiler, rather than
> separate headers (or at least interfaces) and implementation, if you
> alter the implementation, you need to recompile the clients.
>
> That's really bad (IMHO, YMMV, etc)
>
> One of the problems with C++ is that you don't really implement an
> interface. Instead you provide a header file which is part
> implementation, part private data, and part interface. This has the
> mildly unfortunate side effect that your clients can be dependent on
> your private data. pimpl alleviates that at the header level, but not
> at the implementation level.

Now, correct me if I'm wrong, but I think this is Andre Kaufmann's
idea. It seems intriguing.

Let's look at the Java compilation model. There are .java source files.
The first pass of the compiler on any particular .java file does not
reference any other .java file or compiled .class file. This first
pass identifies all "exported" type names and all "imported" type
names (to be brief). The second pass of the compiler then goes along
and uses the information from pass 1 to finish the compilation of all
of the .java files, referencing information across .java and .class
files.

I think Andre is very much promoting a Java style compilation model.

I've been playing around with such a compilation model in my spare
time, specifically for Java. I have been trying to write a correct
incremental build system for Java (and C++, and extensible to any
other sane language). It took a while, and it's nowhere near
finished, but I came to realize that

On Jul 6, 7:42 am, ThosRTanner <ttann...(a)bloomberg.net> wrote:
> With a 2 pass compiler, rather than
> separate headers (or at least interfaces) and implementation, if you
> alter the implementation, you need to recompile the clients.

is quite incorrect.

Let me further formalize what Andre is saying. Consider some Java
source file. That source file defines an external interface and an
internal implementation. If the internal implementation changes, there
is no need to recompile other source files, though you may still have
to relink the library or executable which contains the affected object
file.

Thos is assuming a GNU Make-like model where all dependencies are at
the file level. With such an assumption, he is correct that any change
necessarily requires a recompile of all "clients", or more formally
all source files which directly "import" types from the changed source
file.

However, it is not required to use a GNU Make style "file level" build
system. The build system I'm working on in my spare time does not
have this file-level restriction. Let's take Java as an example.
Here's a gross simplification of the process: when my build system
detects a changed Java source file, it recompiles that file. It then
compares the output class file against the previous output class file.
If there is no change, then the source code change was a no-op. If only
an internal implementation detail changed, then there is no need to
continue cascading the build downstream. If there was a change to
its "exported interface", then all direct dependents need to be
recompiled as well. Cascade down the dependency graph until we reach a
state where the recompiled subtree's leaves are all no-op recompiles,
all changed source files have been recompiled, and no node outside the
recompiled subtree depends on anything whose exported interface
changed.
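
To make the cascade a bit more concrete, here is a rough C++ sketch of
the idea (the Node layout and the compileAndHashInterface() hook are
made up for illustration; the real system is considerably more
involved): recompile a changed node, hash only its exported interface,
and push its direct dependents onto the work queue only when that hash
actually changed.

#include <cstddef>
#include <functional>
#include <map>
#include <queue>
#include <set>
#include <string>
#include <vector>

struct Node {
    std::string source;                  // e.g. "Foo.java"
    std::vector<std::string> dependents; // nodes that import this node's types
    std::size_t lastInterfaceHash = 0;   // hash of the previously built exported interface
};

// Hypothetical hook: run the compiler on 'source' and return a hash of
// only the "exported interface" portion of its output (e.g. the
// non-private parts of the generated .class file). Stubbed here.
std::size_t compileAndHashInterface(const std::string& source) {
    return std::hash<std::string>{}(source); // placeholder only
}

void incrementalRebuild(std::map<std::string, Node>& graph,
                        const std::set<std::string>& changedSources)
{
    std::queue<std::string> work;
    for (const std::string& name : changedSources)
        work.push(name);

    std::set<std::string> done; // each node recompiled at most once in this sketch
    while (!work.empty()) {
        std::string name = work.front();
        work.pop();
        if (!done.insert(name).second)
            continue;

        Node& node = graph.at(name);
        std::size_t newHash = compileAndHashInterface(node.source);
        if (newHash == node.lastInterfaceHash)
            continue;                    // implementation-only change: cascade stops here

        node.lastInterfaceHash = newHash;
        for (const std::string& dep : node.dependents)
            work.push(dep);              // exported interface changed: recompile dependents
    }
}

The sketch leaves out plenty (a node may need more than one recompile
when several upstream interfaces change, deleted files, and so on), but
it shows why the dependency tracking has to be at the "exported
interface" level rather than the file level.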

I don't think this is that hard to do. Besides, the GNU Make model for
build systems is fundamentally broken for incremental correctness.
However, the important thing is that this would be a huge breaking
change to C++, effectively creating another programming language, one
that is not backwards compatible at all.

PS: I can further explain why the GNU Make model for build systems is
fundamentally broken if requested. It suffices to say that I'm
relatively sure that all of you have seen cases where a GNU Make
incremental build was incorrect, even when "properly" implemented a la
Recursive Make Considered Harmful.
http://aegis.sourceforge.net/auug97.pdf
My assertion is that a truly incrementally correct build system should
cover all possible deltas that can be checked into source control.
Corner cases which break the model include:
1- Changed compilation options do not trigger a rebuild. Ex: changing
a preprocessor command line option.
2- Removing a node in the graph with file wildcards does not trigger
required rebuilds. Ex: removing a .cpp source file will not relink the
resulting library or executable.
3- Adding a node in the graph with file search paths does not trigger
required rebuilds. Ex: adding a new header file which "hides" another
header file on the include path.
Also see:
http://www.jot.fm/issues/issue_2004_12/article4.pdf
Such things can be made correct somewhat easily for C++ in GNU Make,
but it really becomes non-idiomatic usage. Also, it's borderline
impossible to get other languages like Java to have a fast, correct
incremental build with GNU Make.


--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

From: Walter Bright on
Mathias Gaunard wrote:
> I personally don't understand the point of non-recoverable exceptions
> at all. If they're non-recoverable, it means it is something that
> should *never* happen, and therefore is a bug in the program itself.
> The program might as well abort and terminate directly. Trying to
> clean up in the face of something that should never happen in the
> first place cannot work, and might actually lead to even more errors.

Yes, you're quite right.

Being able to catch them, however, can make debugging a program easier.
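
A minimal sketch of what that can look like (UnrecoverableError is
just a stand-in type for illustration): the handler makes no attempt
to recover, it only surfaces context before terminating, and that
context is the debugging aid.

#include <cstdlib>
#include <exception>
#include <iostream>

struct UnrecoverableError : std::exception {
    const char* what() const noexcept override { return "invariant violated"; }
};

int main() {
    try {
        // ... real work that may detect a broken program invariant ...
        throw UnrecoverableError{};
    } catch (const UnrecoverableError& e) {
        // No recovery attempted: log the context, then terminate.
        std::cerr << "fatal: " << e.what() << '\n';
        std::abort();
    }
}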


From: TheGunslinger on
Reviewing this thread....

Seems like nothing being discussed within it now pertains to the
original topic.

Whasup w/ that, ppl?

How about some new threads to handle the current topics?

Trying to follow all the digressions and having to wade through them
is wasting time and OVERLOADing this thread.

IMHO...

MJR

On Mon, 21 Jun 2010 17:43:03 CST, Anton Zakitniy
<rivasgames(a)gmail.com> wrote:

>Hello!
>Forgive me if my question is stupid - English is not my native
>language - but this worries me!
>I'm not a very experienced programmer. I really like C++ and I want to
>become a good programmer.
>But something bothers me.
>Will the C++ language be in enough demand in the near future?
>Will it continue to be used by many, many companies and in many, many
>projects, not only for operating systems and games?
>I don't want C++ to become outdated so soon, with C# taking over its
>positions.
>I would be happy to know the answer from a programming guru!
>
>I wish you all the best!


From: Walter Bright on
nmm1(a)cam.ac.uk wrote:
>> I have no problem with, for example, a "fast float" compiler switch
>> that explicitly compromises fp accuracy for speed. But such behavior
>> should not be enshrined in the Standard.
>
> Eh? The standard has the choice of forbidding it or permitting it. Are
> you seriously saying that it should be forbidden? C++ is already slow
> enough compared to Fortran on numeric code that making it worse would be
> a bad idea.
>
> What I am saying is that such matters are not currently specified
> by the standard, and therefore a conforming implementation is free
> to choose what it supports and what it makes the default.

I understand that's your position. I disagree with it. The standard should set a
higher bar for accuracy.

The standard setting a higher bar does not impair in any way an implementation
offering a switch to enable faster floating point with decreased accuracy.
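
To make "decreased accuracy" concrete, here is a small illustration
(not from the thread; it assumes a typical -ffast-math style switch
that permits reassociation): with strict left-to-right IEEE evaluation
the two expressions below differ by 2.0, and a fast-float mode is free
to turn the first form into the second.

#include <cstdio>

int main() {
    double x = 1e16;
    double strict  = (x + 1.0) + 1.0; // strict IEEE, left to right: 1e16
    double reassoc = x + (1.0 + 1.0); // what reassociation may produce: 1e16 + 2
    std::printf("%.1f\n%.1f\n", strict, reassoc);
}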


From: Walter Bright on
Joshua Maurice wrote:
> On this question, could someone define "pure" exactly in context
> please? The possible counterexample I immediately thought of is union
> find. A find operation on a forest of rank trees in union find is
> "logically const" in that it doesn't change observable state, but it
> does greatly mutate internal state to get better performance on
> subsequent lookups. operator== on some data structures may do
> operations like this to optimize future operations. However, this
> reminds me a lot of the mutable keyword as "an exception" to const
> correctness. Perhaps a similar concept is needed for pure functions.


Pure functions would not allow "logical const". While logical constness hides
the mutation from the programmer, the mutation is still very much there (as you
said), and the consequence is that the function cannot be treated as pure by the
compiler. For example, the usual synchronization issues arise when doing
concurrent programming with such impure functions.
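
Here is a sketch of the union-find case in C++ terms (the class layout
and the internal mutex are illustration only, not anything from the
thread): find() is logically const, but path compression rewrites
shared state, so a compiler could not treat it as pure, and concurrent
callers need the synchronization mentioned above.

#include <cstddef>
#include <mutex>
#include <vector>

class DisjointSets {
public:
    explicit DisjointSets(std::size_t n) : parent_(n) {
        for (std::size_t i = 0; i < n; ++i) parent_[i] = i;
    }

    // "Logically const": find() never changes which set an element
    // belongs to, but path compression mutates parent_ to speed up
    // later lookups, so this is not a pure function.
    std::size_t find(std::size_t x) const {
        std::lock_guard<std::mutex> lock(m_);
        std::size_t root = x;
        while (parent_[root] != root)
            root = parent_[root];
        while (parent_[x] != root) {         // path compression
            std::size_t next = parent_[x];
            parent_[x] = root;
            x = next;
        }
        return root;
    }

private:
    mutable std::vector<std::size_t> parent_;
    mutable std::mutex m_;
};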
