From: Jorgen Grahn on
On Sun, 9 Nov 2008 20:00:31 CST, marlow.andrew(a)googlemail.com <marlow.andrew(a)googlemail.com> wrote:
> A while ago I wrote a tool to produce a dependency graph for C/C++
> libraries using graphviz.

[moved up from near the end]

> Note: I have been saying C/C++ when actually, depdot works for any
> ".a" or ".so" library.

*That* is useful information. So you use the symbols in the libraries
-- that was not obvious from what you wrote above. And it makes it
distinctly different from e.g. Doxygen. Whenever you describe the
tool, you should clarify this early on, I think.
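
For what it's worth, here is a rough sketch (Python, purely for
illustration -- I don't know exactly how depdot does it) of how the
symbol information nm gives you can be turned into graphviz input.
The library names are made up, and it assumes nm(1) is on the PATH:

import subprocess
from itertools import permutations

def symbols(lib):
    # External symbols of a .a/.so via nm(1): (defined, undefined).
    out = subprocess.run(["nm", "-g", lib], capture_output=True,
                         text=True).stdout
    defined, undefined = set(), set()
    for line in out.splitlines():
        parts = line.split()
        if len(parts) < 2:
            continue   # skip blank lines and archive member headers
        kind, name = parts[-2], parts[-1]
        (undefined if kind == "U" else defined).add(name)
    return defined, undefined

def emit_dot(libs):
    # Edge A -> B whenever A references a symbol that B defines.
    syms = {lib: symbols(lib) for lib in libs}
    print("digraph deps {")
    for a, b in permutations(libs, 2):
        if syms[a][1] & syms[b][0]:
            print('  "%s" -> "%s";' % (a, b))
    print("}")

if __name__ == "__main__":
    emit_dot(["libfoo.a", "libbar.a", "libbaz.so"])   # made-up names

Pipe the output into dot (e.g. dot -Tpng -o deps.png) to get the
picture.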

I think that developers in general don't use the object files as
well as they could. It's surprising how often I can find something
useful quickly by feeding an object file, library or executable
through nm(1) (the symbol lister under Linux and at least some
other Unixes). Feeding the output through grep, perl, sort and/or
uniq can also help a lot.
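
For instance (a hypothetical sketch in the same vein, assuming GNU nm
and its --defined-only option), answering "which library in this
directory defines symbol X?":

import glob, subprocess, sys

def defines(lib, wanted):
    # True if `lib` has a defined (non-"U") global symbol `wanted`.
    out = subprocess.run(["nm", "-g", "--defined-only", lib],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        parts = line.split()
        if parts and parts[-1] == wanted:
            return True
    return False

if __name__ == "__main__":
    wanted = sys.argv[1]                   # e.g. "my_function"
    for lib in sorted(glob.glob("*.a") + glob.glob("*.so")):
        if defines(lib, wanted):
            print("%s defines %s" % (lib, wanted))

Saved as, say, whichlib.py (a made-up name), you would run it in the
library directory as: python whichlib.py pthread_create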

....
> I see from the statistics that it is hardly used and I am puzzled as
> to why. When I wrote it there was no other comparable tool, as far as
> I could see. The problem of making sense of a large project with lots
> of tangled C/C++ libraries comes up time and time again. Well, at
> least it does for me. So it seemed it would fill a need. But the
> statistics seem to say otherwise. Is this because it is not generally
> a problem?

In my limited experience, you typically use a few stable, external
libraries, and no "in-house" libraries. The software itself may be
split into .a/.lib files (static libraries or archives), but mostly
because it modularizes the Makefile nicely. These libraries tend to
be few, map 1:1 to subdirectories, and have naturally simple
relationships (no risk of circular dependencies).

So no, to me this has rarely been a problem.

....
> Also, I am aware that there is no such language
> as C/C++!!! It's just that I see the problem most often in either C
> projects or C++ projects.

Some people (including myself) prefer to see "C and C++", and may be
put off by "C/C++" even if you point out that you know there is a
difference. It's like the alternative abbreviations of "Science
Fiction" -- call it SciFi and certain people will stop listening.

/Jorgen

--
// Jorgen Grahn <grahn@ Ph'nglui mglw'nafh Cthulhu
\X/ snipabacken.se> R'lyeh wgah'nagl fhtagn!


From: AnonMail2005 on
>
> In my limited experience, you typically use a few stable, external
> libraries, and no "in-house" libraries. The software itself may be
> split into .a/.lib files (static libraries or archives), but mostly
> because it modularizes the Makefile nicely. These libraries tend to
> be few, map 1:1 to subdirectories, and have naturally simple
> relationships (no risk of circular dependencies).
>
> So no, to me this has rarely been a problem.
>
With respect to external libraries, this has been my experience too.
When we do use in-house libraries, we certainly have the source -
otherwise there would be no point in listing the dependencies, since
listing them is only a precursor to reorganizing and/or fixing any
dependency issues found.

HTH



From: marlow.andrew on
On 12 Nov, 06:58, Jorgen Grahn <grahn+n...(a)snipabacken.se> wrote:

> > I see from the statistics that it is hardly used and I am puzzled as
> > to why. When I wrote it there was no other comparable tool, as far as
> > I could see. The problem of making sense of a large project with lots
> > of tangled C/C++ libraries comes up time and time again. Well, at
> > least it does for me. So it seemed it would fill a need. But the
> > statistics seem to say otherwise. Is this because it is not generally
> > a problem?
>
> In my limited experience, you typically use a few stable, external
> libraries, and no "in-house" libraries. The software itself may be
> split into .a/.lib files (static libraries or archives), but mostly
> because it modularizes the Makefile nicely. These libraries tend to
> be few, map 1:1 to subdirectories, and have naturally simple
> relationships (no risk of circular dependencies).
>
> So no, to me this has rarely been a problem.

I am very surprised to hear this. Several times I have arrived at an
organisation to find that there are loads of home-grown libraries that
have to be linked into the app and that these libraries are connected
by a tangled web of dependencies that no-one understands. These are
big projects, with tens of millions of lines of code. There is far too
much code to perform a source code analysis. In some extreme cases the
organisation has even managed to lose some of the source code! Maybe
most people never work on anything that big with these problems. I
must just be lucky, I suppose :-)

-Andrew Marlow



From: Ira Baxter on
<marlow.andrew(a)googlemail.com> wrote in message
news:54ed79ad-e05e-4031-8c34-37df8e9b0052(a)1g2000prd.googlegroups.com...
> On 12 Nov, 06:58, Jorgen Grahn <grahn+n...(a)snipabacken.se> wrote:
>
> > > I see from the statistics that it is hardly used and I am puzzled as
> > > to why. When I wrote it there was no other comparable tool, as far as
> > > I could see. The problem of making sense of a large project with lots
> > > of tangled C/C++ libraries comes up time and time again. Well, at
> > > least it does for me. So it seemed it would fill a need. But the
> > > statistics seem to say otherwise. Is this because it is not generally
> > > a problem?
> >
> > In my limited experience, you typically use a few stable, external
> > libraries, and no "in-house" libraries. The software itself may be
> > split into .a/.lib files (static libraries or archives), but mostly
> > because it modularizes the Makefile nicely. These libraries tend to
> > be few, map 1:1 to subdirectories, and have naturally simple
> > relationships (no risk of circular dependencies).
> >
> > So no, to me this has rarely been a problem.
>
> I am very surprised to hear this. Several times I have arrived at an
> organisation to find that there are loads of home-grown libraries that
> have to be linked into the app and that these libraries are connected
> by a tangled web of dependencies that no-one understands. These are
> big projects, with tens of millions of lines of code. There is far too
> much code to perform a source code analysis. In some extreme cases the
> organisation has even managed to lose some of the source code! Maybe
> most people never work on anything that big with these problems. I
> must just be lucky, I suppose :-)

It may have been the case in the past that there was too much source
code to do an analysis on small machines.
But machines have grown in capability
considerably in the last several years. Is it really the case that
there's too much code? Or just that people don't have the tools
or don't want to go through the effort?
[Lost source code doesn't improve the situation, but can be patched
around even for source analysis.]

Given that static analysis tools are becoming necessary, it would
seem to me that the mantra should be, "analyze those source code
systems no matter how big". Once you go down that route,
collecting information such as this is straightforward.

[I have a bias... we build static analyzers that do tens of millions
of lines of code.]

--
Ira Baxter, CTO
www.semanticdesigns.com



From: marlow.andrew on
On 17 Nov, 07:23, "Ira Baxter" <idbax...(a)semdesigns.com> wrote:

> > These are
> > big projects, with tens of millions of lines of code. There is far too
> > much code to perform a source code analysis. In some extreme cases the
> > organisation has even managed to lose some of the source code! Maybe
> > most people never work on anything that big with these problems. I
> > must just be lucky, I suppose :-)
>
> It may have been the case in the past that there was too much source
> code to do an analysis on small machines.
> But machines have grown in capability
> considerably in the last several years. Is it really the case that
> there's too much code?

Yes. There was over 20 million LOC - some say around 30 million - and
no-one knows for sure how big it really is. This was for software in
development over 20 years with current manpower at 2,000 programmers.
The executable is so large that at one point the procedure code
segment was completely full (this is on AIX, which has a segmented
architecture). I am not aware of a library analyser that can cope with
20-30 million LOC. I would love to hear of one that can handle that
sort of volume of data :-)

Regards,

Andrew Marlow



