From: Daniel T. on
brangdon(a)cix.co.uk (Dave Harris) wrote:
> daniel_t(a)earthlink.net (Daniel T.) wrote (abridged):
> >
> > > Yes. It led to simple code, but that was partly because all the
> > > complexity was hidden behind an iterator.
> > > [...]
> >
> > I didn't post any more than I did because there isn't enough context
> > to tell what the best solution is. Even though .size() is checked
> > at each level, there may still be an invariant such that all the
> > sizes are the same.
>
> I would be interested in code that solved the same problem that the
> original code solved. Or are you agreeing that the original code was the
> best, for that problem? Can you only do better if you change the problem
> by adding extra invariants?

Am I adding extra invariants? I don't know what the invariants are, and
I have no reason to believe that the code presented solved any problem
at all.

That's part of the point I'm trying to make here. We have been handed an
incomplete solution to a nonexistent problem and you are asking me if I
could write code that "solved the same problem..." Here you go:

int main() { }

My example above has the benefit of actually being able to compile and
run. You can't do that with the original example.
From: Dave Harris on
nathancbaker(a)gmail.com (Nathan) wrote (abridged):
> > Specifically, you need to be aware that
> > data[0][0].size() may be different to data[1][0].size() or
> > data[0][1].size(), and that any of them may be zero meaning
> > that (eg) data[0][0][0] does not exist and must not be accessed.
>
> [...]
> Value_t* MyClass::findValue(const Value_t& value)
> {
>     size_t xyzMax = data.size() + data[xInd].size()
>                   + data[xInd][yInd].size();
>     for(size_t xInd = 0; xInd < xyzMax; ++xInd)
>     {
>         if(data[xInd] == value)
>             return &data[xInd];
>     }
>
>     return 0;
> }

But this does not do the same thing at all. You have done exactly what I
warned you against: you suppose that data[0][0].size() is the same as
data[0][1].size().

You also assume the values are contiguous in memory, which won't be true
if data[0] is a std::vector (and it is almost certainly some kind of
data structure; given that it defines a size() method, it won't be a
plain array).

And you've introduced additional bugs, for example adding the sizes when
you need to multiply them. Your later code misses out some braces so that
the loop always terminates after one iteration. You don't declare or
initialise all your variables. Frankly, you are not gaining any
credibility here.


> [...]
> So,
>
> 3 loop constructs are reduced to 1
> 4 conditionals are reduced to 2
> 2 exit points are reduced to 1
>
> Simple, isn't it?

When you can solve the original problem correctly, we can talk about
whether your solution is simpler.

-- Dave Harris, Nottingham, UK.
From: Daniel T. on
brangdon(a)cix.co.uk (Dave Harris) wrote:
> daniel_t(a)earthlink.net (Daniel T.) wrote (abridged):
> >
> > > I would be interested in code that solved the same problem that
> > > the original code solved. Or are you agreeing that the original
> > > code was the best, for that problem? Can you only do better if you
> > > change the problem by adding extra invariants?
> >
> > Am I adding extra invariants? I don't know what the invariants are,
> > and I have no reason to believe that the code presented solved any
> > problem at all.
> >
> > That's part of the point I'm trying to make here. We have been
> > handed an incomplete solution to a nonexistent problem and you are
> > asking me if I could write code that "solved the same problem..."
> > Here you go:
> >
> > int main() { }
> >
> > My example above has the benefit of actually being able to compile
> > and run. You can't do that with the original example.
>
> The original code was:
>
> Value_t* MyClass::findValue(const Value_t& value)
> {
>     for(size_t xInd = 0; xInd < data.size(); ++xInd)
>     for(size_t yInd = 0; yInd < data[xInd].size(); ++yInd)
>     for(size_t zInd = 0; zInd < data[xInd][yInd].size(); ++zInd)
>     {
>         if(data[xInd][yInd][zInd] == value)
>             return &data[xInd][yInd][zInd];
>     }
>
>     return 0;
> }
>
> If you are truly saying you don't understand what problem this code is
> trying to solve, to the point where you claim "int main() { }" is
> solving the same problem, then I have trouble believing you are
> debating in good faith. How can your code comprehension skills be so
> poor?
>
> If it helps, I'll add some code.
>
> #include <vector>
>
> using namespace std;
> typedef int Value_t;
> typedef vector<Value_t> z_vec;
> typedef vector<z_vec> y_vec;
> typedef vector<y_vec> x_vec;
>
> struct MyClass {
>     x_vec data;
>
>     void test();
>     Value_t *findValue( const Value_t& value );
> };
>
> int main() {
>     MyClass().test();
> }
>
> void MyClass::test() {
>     findValue( 0 );
> }
>
> Putting this in front of the original code is enough to make it
> compile. Any competent C++ programmer could have come up with it. I'll
> let you write some code to initialise MyClass::data from stdin
> yourself. If you are truly incapable of doing that, then I have to
> question whether your opinions are worth any weight at all. (I don't
> need to see the code. I just want you to stop being obtuse, when you
> clearly know better.)
>
> There are other ways to define data etc so that the original code
> compiles. It shouldn't be necessary to assume more about the problem
> than what the original code gave us, which was basically operator[]
> and size() as used. That was enough for the original code. Alternative
> solutions ought to work with any reasonable definitions, including
> this one.
>
> You shouldn't assume that the data are contiguous, or that someone
> else will write a convenient magic iterator for you.

Once again, you pile on a bunch of code without specifying the problem
it's supposed to solve, then you challenge me to provide code that will
better "solve the problem." I ask again: what problem? Why shouldn't I
assume that the data are contiguous? In what context is a
short-circuiting linear search through a ragged 3-D container
appropriate? Just because that's the way you want it, so that you can
stand on your soapbox and say, "see, I told you so"?

What is it you are trying to prove? If all you want is for me to say
that sometimes multiple returns are the best solution... I've already
done that. Otherwise, explain yourself better.
From: wolfgang kern on

Nathan Baker wrote:

>> Ok Nate, we had enough discussions on this matter since HLLs
>> entered our programming world ...
>> We better give up arguing and let the 'faster' programmers
>> be proud of their 'maintainable/foolproof-readable' sources
>> which are awful detours with "abstraction layers" while the
>> few hardware freaks like me work on "really existing things" :)

> The CPU experiences a nightmare while executing HLL code.
> Perhaps there is an instructive way for us to demonstrate this fact?

Even if we tried this one more time, I doubt that
'fundamental' HLL-coders would listen at all.
Their belief in their holy compilers seems to be very strong :)

I don't think a demonstration of the bloated redundancy is required:
disassemble whatever you find (windoze+lunix/app+sys) and you will
immediately see all the weird detours created by 'a tool'.

The few programmers who know both worlds may be aware of this and
can create less bloated and faster code even with an HLL.

__
wolfgang
some believe in god, many in gold, a few in logic,
and HLL-coders in their compiler.



From: Alexei A. Frounze on
On May 11, 12:55 am, "wolfgang kern" <nowh...(a)never.at> wrote:
> Nathan Baker wrote:
> >> Ok Nate, we had enough discussions on this matter since HLLs
> >> entered our programming world ...
> >> We better give up arguing and let the 'faster' programmers
> >> be proud of their 'maintainable/foolproof-readable' sources
> >> which are awful detours with "abstraction layers" while the
> >> few hardware freaks like me work on "really existing things" :)
> > The CPU experiences a nightmare while executing HLL code.
> > Perhaps there is an instructive way for us to demonstrate this fact?
>
> Even if we tried this one more time, I doubt that
> 'fundamental' HLL-coders would listen at all.
> Their belief in their holy compilers seems to be very strong :)
>
> I don't think a demonstration of the bloated redundancy is required:
> disassemble whatever you find (windoze+lunix/app+sys) and you will
> immediately see all the weird detours created by 'a tool'.
>
> The few programmers who know both worlds may be aware of this and
> can create less bloated and faster code even with an HLL.

There are several reasons for what you call compiler-produced bloat:
- copy-pasted code at the source level (can happen in asm too)
- duplicate functionality and data in different parts of big
applications (can happen in asm too)
- overly complicated code (can happen in asm too)
- dead code and data (can happen in asm too)
- error handling code (you probably need it in asm too)
- use of macros, inlining and templates (macros exist in asm too)
- calling convention support code for interoperation, variable
argument subroutines, exception handling and debugging (you may need
or want some of them in asm too)
- code alignment (you may need it in asm too)
- inadequately chosen optimization switches or speed preference over
size
- global variables, including not tightly packed structures (you may
want data alignment in asm too)

So, some of it comes from bad or complex source code, some from the
fact that applications are made from many different parts developed
independently by different people, some is dictated by the overall API
design, and some from how the compiler and linker are used. Then again,
making software quickly is often more important than making it fast or
small or both. That depends on the business model, which you may be
unable to change other than by quitting your employer and finding a
"better" one, starting your own company, or simply going self-
employed. I'm just explaining how this bloat is possible and expected.

Alex