From: Chris Sonnack on
topmind writes:

>> It appears that because your experience is limited to relational
>> databases,
>
> Limited to? RDBMS are behind some of the largest applications around.

Your comprehension of basic logic is abysmal.

First, if you lived in China (one of the biggest countries) all your
life, your experience would be *limited* to living in one place.

Second, the size and power of RDBMSes has no connection to YOUR experience.


> I don't understand why you suggest that reporting systems are
> inherently "less complex" than whatever you have in mind.

Because it's true. It's understandable you don't realize that, because
you've lived in "China" all your professional life (it seems).


--
|_ CJSonnack <Chris(a)Sonnack.com> _____________| How's my programming? |
|_ http://www.Sonnack.com/ ___________________| Call: 1-800-DEV-NULL |
|_____________________________________________|_______________________|
From: Chris Sonnack on
topmind writes:

>>> Copy-and-paste actually *reduces* coupling because it lets things be
>>> independent, for example. Thus, if reducing coupling is always good,
>>> then copy-and-paste is always good.
>>
>> It may appear to reduce coupling, but as has been said, it creates an
>> invisible (to the source) web of coupling that needs to be documented
>> and maintained by the developer.
>
> Yes, but that is a nebulous form of "coupling".

Which is the worst kind! Much harder to maintain.

> Robert Martin implied that "coupling" was a sure-shot metric that by
> itself would guide decisions.

No, that was your willful misinterpretation of what he said. There are
no sure-fire metrics; the Universal Answer to any computing question is
always, "It Depends."


> Note that I was not promoting copy-and-paste above. I was only
> exploring its relationship to Robert's view of "coupling".

Coupling is usually to be avoided as much as possible. Invisible coupling
that depends on separate documentation or programmer memory is the worst
kind. There's nothing inconsistent with what Robert said.
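
To make that concrete, here is a rough Python sketch (the routines and
the tax figure are invented for illustration, not taken from anyone's
real code):

    # Copy-and-paste version: two copies of the same rule. Nothing in
    # the source links them; only documentation or the programmer's
    # memory says they must change together.
    def invoice_total(amount):
        return amount * 1.0825      # sales-tax factor, copy #1

    def quote_total(amount):
        return amount * 1.0825      # sales-tax factor, copy #2 -- easy to miss

    # Shared-routine version: the dependency is visible in the source,
    # so a search or a call graph will find every user of the rule.
    SALES_TAX_FACTOR = 1.0825

    def with_tax(amount):
        return amount * SALES_TAX_FACTOR

    def invoice_total2(amount):
        return with_tax(amount)     # explicit, traceable coupling

    def quote_total2(amount):
        return with_tax(amount)

The second version still has coupling, but it is the visible kind that
tools can track; the first version's coupling lives only in someone's
head.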

--
|_ CJSonnack <Chris(a)Sonnack.com> _____________| How's my programming? |
|_ http://www.Sonnack.com/ ___________________| Call: 1-800-DEV-NULL |
|_____________________________________________|_______________________|
From: Chris Sonnack on
topmind writes:

>>> [...tree snipped...]
>>>
>>> Most people would produce a fairly similar hierarchy without even
>>> seeing this one.
>>
>> LOL! You mean to say it is so naturally tree-shaped that most people
>> would naturally produce a tree?
>
> I have agreed that *people* naturally relate to trees.

First, why do you think that is?

Second, *people* create programs. Therefore, obviously, the tree must be
one of the natural models.

> Thus, they tend to produce algorithms that have a tree-shaped division
> of tasks. It is not that tasks are inherently tree-shaped, it is
> that people are more comfortable using trees to describe them.

And why do you think that is if not because hierarchical structure is
both natural and *useful*? (Things that are not useful do not tend to
survive that long.)

> However, on a larger scale algorithms are not pure trees because
> usually named tasks (subroutines) eventually come in to play as we
> scale up. Subroutines break the tree because they are cross-branch
> links.

First, as I've already pointed out, cross-branch links don't make a
tree not a tree. They just make it not certain types of trees.

Second, you don't really mean algorithm, you mean program. Algorithms
typically ARE very hierarchical, and have been ever since structured
programming was invented. Smaller blocks within larger blocks: just
about every algorithm is structured that way.
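
A trivial Python sketch of "blocks within blocks" (the routine and its
data are made up; the nesting is the point):

    def shipped_totals_by_region(orders):
        # outer block: the routine itself
        totals = {}
        for order in orders:                     # block nested in the routine
            if order["status"] == "shipped":     # block nested in the loop
                region = order["region"]
                totals[region] = totals.get(region, 0) + order["amount"]
        return totals

    print(shipped_totals_by_region([
        {"status": "shipped", "region": "east", "amount": 10},
        {"status": "open",    "region": "east", "amount": 99},
        {"status": "shipped", "region": "west", "amount": 5},
    ]))
    # {'east': 10, 'west': 5}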

> IOW, on a small scale or *informal* discussions (which is what the
> above is), trees are a decent useful lie.

You can label them a "lie" if you like, but they appear to reflect the
truth of very many situations to me.

Clearly most of the human race would seem to agree, since you admit
that "most people would naturally produce a tree."

--
|_ CJSonnack <Chris(a)Sonnack.com> _____________| How's my programming? |
|_ http://www.Sonnack.com/ ___________________| Call: 1-800-DEV-NULL |
|_____________________________________________|_______________________|
From: topmind on
>
> >>> [...tree snipped...]

It got what it deserves :-)

> >>>
> >>> Most people would produce a fairly similar hierarchy without even
> >>> seeing this one.
> >>
> >> LOL! You mean to say it is so naturally tree-shaped that most people
> >> would naturally produce a tree?
> >
> > I have agreed that *people* naturally relate to trees.
>
> First, why do you think that is?

That is a very good question that I cannot fully answer. Ask the chief
engineer(s) of the brain.

I suspect in part it is because trees can be well-represented visually
such that the primary pattern is easy to discern. I say "primary"
because trees often visually over-emphasize one or a few factors at the
expense of others.

Sets can be used for visual inspection also, but it takes some training
and experience to get comfortable with them. Part of this is because
one must learn how to "query" sets from different perspectives. There
generally is no one right view (at least not on 2D displays) that
readily shows the pattern(s). One has to get comfortable with query
tools and techniques.
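
As a rough illustration (invented data; Python here just stands in for
whatever query tool you prefer), the same flat set of facts can be
queried from several perspectives without crowning any one of them as
the master hierarchy:

    # One flat set of facts; no single parent/child axis is privileged.
    tasks = [
        {"name": "parse",    "layer": "input",  "language": "C",      "owner": "ann"},
        {"name": "validate", "layer": "input",  "language": "Python", "owner": "bob"},
        {"name": "report",   "layer": "output", "language": "Python", "owner": "ann"},
    ]

    # "Query" the set from whichever perspective matters right now.
    by_layer    = [t["name"] for t in tasks if t["layer"] == "input"]
    by_language = [t["name"] for t in tasks if t["language"] == "Python"]
    by_owner    = [t["name"] for t in tasks if t["owner"] == "ann"]

    print(by_layer)      # ['parse', 'validate']
    print(by_language)   # ['validate', 'report']
    print(by_owner)      # ['parse', 'report']

A tree forces you to pick one of those groupings up front; the set
leaves all of them equally available.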

>
> Second, *people* create programs. Therefore, obviously, the tree must be
> one of the natural models.


Do you mean natural to the human mind or natural to the universe such
that all intelligent space aliens will also dig them?

Being natural does not necessarily mean "good". For example, it is
natural for humans to like the taste of sugar, fats, and fiber-free
starch. However, too much of these often make us unhealthy.

Similarly, I agree that trees provide a kind of *instant
gratification*. It is the longer-term or up-scaling where they tend to
bite back. Thus, sets are for the more educated, enlightened mind in my
opinion. Sets are health-food for software design.


>
> > Thus, they tend to produce algorithms that have a tree-shaped division
> > of tasks. It is not that tasks are inherently tree-shaped, it is
> > that people are more comfortable using trees to describe them.
>
> And why do you think that is if not because hierarchical structure is
> both natural and *useful*?


See the note above about instant gratification.

> (Things that are not useful do not tend to
> survive that long.)

Maybe tree-lovers will become extinct in a few thousand years, to
be replaced by the superior HomoSETius :-)


>
> > However, on a larger scale algorithms are not pure trees because
> > usually named tasks (subroutines) eventually come in to play as we
> > scale up. Subroutines break the tree because they are cross-branch
> > links.
>
> First, as I've already pointed out, cross-branch links don't make a
> tree not a tree. They just make it not certain types of trees.

"Type of tree"? Please clarify. If you print out the code on a big
sheet of paper and draw the lines between the subroutine references,
you will see that it is *not* a tree.

Now, *any* graph can indeed be represented as a tree if you are willing
to display inter-branch references as duplicate nodes, which is what
the multiple subroutine calls essentially are.
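
A small Python sketch of that (the call graph is made up, and it
assumes no recursive calls, i.e. no cycles): unfolding the graph into a
tree simply duplicates the shared subroutine under every caller.

    # A call graph: 'format_date' is shared, so this is a DAG, not a tree.
    calls = {
        "main":          ["print_invoice", "print_report"],
        "print_invoice": ["format_date"],
        "print_report":  ["format_date"],
        "format_date":   [],
    }

    def expand(node, graph):
        """Unfold the graph into a tree; shared nodes become duplicates."""
        return {node: [expand(child, graph) for child in graph[node]]}

    print(expand("main", calls))
    # {'main': [{'print_invoice': [{'format_date': []}]},
    #           {'print_report':  [{'format_date': []}]}]}
    # 'format_date' now appears twice -- the duplication described above.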

>
> Second, you don't really mean algorithm, you mean program. Algorithms
> typically ARE very hierarchical, and have been ever since structured
> programming was invented. Smaller blocks within larger blocks: just
> about every algorithm is structured that way.


On a small scale, yes. On a bigger scale it is not hierarchical, per
above. Many programs are essentially hundreds of small trees joined by
lots of references that bust tree-ness. Picture drawing hundreds of
small trees randomly scattered on a large piece of paper. Now randomly
draw thousands of lines between various nodes of the different trees,
either back to the same tree or over to other trees. A big bowl of
spaghetti mixed with broccoli! Quite tasty and healthy to boot.


>
> > IOW, on a small scale or *informal* discussions (which is what the
> > above is), trees are a decent useful lie.
>
> You can label them a "lie" if you like, but they appear to reflect the
> truth of very many situations to me.

Well, every abstraction is probably a lie to some extent. Some are just
better lies than others. It is only a "truth" if you constantly
refactor them for each new combination that does not cleanly fit a
tree, or live with duplicate nodes.

>
> Clearly most of the human race would seem to agree, since you admit
> that "most people would naturally produce a tree."

Because most people are naive and go with instant gratification and/or
the comfortable. Sets are admittedly not immediately comfortable, but
they allow one to shift up a level of complexity and change management.
Enlightenment comes with a price. That's life.

-T-

From: topmind on

>
> >> It appears that because your experience is limited to relational
> >> databases,
> >
> > Limited to? RDBMS are behind some of the largest applications around.
>
> Your comprehension of basic logic is abysmal.
>
> First, if you lived in China (one of the biggest countries) all your
> life, your experience would be *limited* to living in one place.
>
> Second, the size and power of RDBMSes has no connection to YOUR experience.


I read your statement as implying that applications which use DB's are
small and/or simple. If that is not what you meant, then please
clarify.

Further, the places where RDB's don't do well are mostly related to
limited DB implementations, limited device resources (such as small
embedded systems), and precision timing requirements (auto-GC tends to
make DB's timing somewhat unpredictable). It is rarely because of a
lack of ability to organize and manage information and software.


>
> > I don't understand why you suggest that reporting systems are
> > inherently "less complex" than whatever you have in mind.
>
> Because it's true. It's understandable you don't realize that, because
> you've lived in "China" all your professional life (it seems).
>

You are wrong. Like I said elsewhere, I have worked on many kinds of
business applications, not just reporting, and one is not inherently
simpler than the other. Can you present evidence of this alleged
inherent simplicity?

And even if it were true, at least you seem to agree that it is an area
where polymorphism does not help much. Can I at least get you to admit
that? An admission that polymorphism is NOT a universal,
general-purpose technique is an important revelation.

-T-
oop.ismad.com
