From: Chris Sonnack on
topmind writes:

>>>> It appears that because your experience is limited to relational
>>>> databases,
>>>
>>> Limited to? RDBMS are behind some of the largest applications around.
>>
>> Your comprehension of basic logic is abysmal.
>>
>> First, if you lived in China (one of the biggest countries) all your
>> life, your experience would be *limited* to living in one place.
>>
>> Second, the size and power of RDBMSes has no connection to YOUR
>> experience.
>
> I read your statement as an implication that applications that use
> DB's were small and/or simple.

The original statement was not mine, but I understand exactly what he
meant, which was that you appear to have worked in a database-centric
environment doing primarily report-based work. You don't seem to have
denied this, so I'm assuming he was basically correct.

From *our* conversation you don't appear to be an experienced programmer
to me, or at least not highly experienced, and you don't appear to have
much in the way of OOD skills. (If I'm wrong, I apologize, but that's
how it seems.)

This has nothing to do with DBMSes themselves. Databases are used in
the most trivial of applications as well as in the most complicated. In
and of themselves, databases say nothing about the complexity of the
application.


>>> I don't understand why you suggest that reporting systems are
>>> inherently "less complex" than whatever you have in mind.
>>
>> Because it's true. It's understandable you don't realize that, because
>> you've lived in "China" all your professional life (it seems).
>
> You are wrong. Like I said elsewhere, I have worked on many kinds of
> business applications, not just reporting, and one is not inherently
> more simple than the other. Can you present evidence of this alleged
> inherent simplicity?

Certainly. Business applications (and I've BEEN a corporate programmer
for two decades and have extensively used RDBMSes) are highly, if not
100%, deterministic. They (biz apps) tend to involve data storage and
retrieval and data reporting. They tend to be based on a *relatively*
small set of business rules with a relatively small set of permutations.

For truly complex software, consider voice or image recognition, or the
software used by physicists to delve into quantum physics, or digital
signal processing, or advanced mathematical programming, or the
computer-generated imaging software used in the movies.

For that matter, many compilers are vastly more complicated than most
business apps.

How do I know these are more complex than biz apps? Because, as I've
said, I've been *doing* biz apps for 20 years, and at least some of the
software fields I listed above are well beyond me.

Trust me, especially with the frameworks and tools available today,
biz apps are easy.

> And even if it was true, at least you seem to agree that it is an area
> where polymorphism does not help much.

No, I never agreed with that. I used p/m quite a lot, actually.

> Can I at least get you to admit
> that? An admission that polymorphism is NOT a universal,
> general-purpose technique is an important revelation.

If you mean it's not useful in every program, of course I agree.
Very little is useful in *every* program.

--
|_ CJSonnack <Chris(a)Sonnack.com> _____________| How's my programming? |
|_ http://www.Sonnack.com/ ___________________| Call: 1-800-DEV-NULL |
|_____________________________________________|_______________________|
From: Chris Sonnack on
topmind writes:

>>> I have agreed that *people* naturally relate to trees.
>>
>> First, why do you think that is?
>
> That is a very good question that I cannot fully answer. Ask the chief
> engineer(s) of the brain.

[grin] They don't seem available for comment. We'll just have to guess.

> I suspect in part it is because trees can be well-represented visually
> such that the primary pattern is easy to discern. I say "primary"
> because trees often visually over-emphasize one or a few factors at the
> expense of others.

You make a lot of statements you don't support with examples. Can you
provide an example of this happening?

I think the reason is fairly simple. But I think we need to draw a
distinction between representing DATA in a tree and representing some
taxonomy as a tree. I (and others) have agreed that large taxonomies
(unless the environment is very stable and clearcut) can be a problem.
(I'm not sure that tables necessarily fix those problems, however; the
problems come more from the difficulty of classifying things, IMO.)

However, when it comes to representing certain *types* of datasets, I
think that trees are the most appropriate form.

Here we seem to be talking about tree-shaped *data*, each datum being
a step in a larger task. It was entirely natural for you to break it
down as an outline (aka tree) because that's exactly the shape of the
data.

People find this useful and natural for the simple reason that it allows
you to focus on the level of detail desired at the moment. That's the
whole point of an outline.

File systems are hierarchical for the same reason. It allows you to
partition your data (files) into useful categories. It also allows you
to perform operations on a sub-set easily without needing to access or
filter the rest.

Companies and the military are hierarchical for a similar reason: the
"higher ups" deal with the big picture, the "low downs" deal with the
details.

> Sets can be used for visual inspection also, but it takes some training
> and experience to get comfortable with them. Part of this is because
> one must learn how to "query" sets from different perspectives. There
> generally is no one right view (at least not on 2D displays) that
> readily shows the pattern(s). One has to get comfortable with query
> tools and techniques.

Of course. Sets are raw, unstructured data.

And reading raw data can fail to carry real meaning, hence the usefulness
of charts and graphs. Very often, when dealing with a tabular dataset,
I find myself quite surprised to see what the data is really saying once
I start mining it with graphs and queries.

>> Second, *people* create programs. Therefore, obviously, the tree must
>> be one of the natural models.
>
> Do you mean natural to the human mind or natural to the universe such
> that all intelligent space aliens will also dig them?

Tree structures are a (universally) natural data type, so I do believe
all intelligent minds will discover and use them.

> Being natural does not necessarily mean "good". For example, it is
> natural for humans to like the taste of sugar, fats, and fiber-free
> starch. However, too much of these often make us unhealthy.

Too much, yes. But without them, we perish.

> Similarly, I agree that trees provide a kind of *instant gratification*.
> It is the longer-term or up-scaling where they tend to bite back. Thus,
> sets are for the more educated, enlightened mind in my opinion.

I understand that's your opinion. It's unsubstantiated in mine.


>> First, as I've already pointed out, cross-branch links don't make a
>> tree not a tree. They just make it not certain types of trees.
>
> "Type of tree"? Please clarify.

There are binary trees and n-ary trees. There are trees that are
balanced and unbalanced. There are trees that require all nodes to
be unique and trees that allow nodes to be repeated. There are
trees that allow nodes to reference each other (consider links in
a Unix filesystem). There are trees where the branches are allowed
to reconnect.

A tree is really any structure with a "root" and child nodes.
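
To picture a couple of those variants, here's a bare-bones sketch
(Java; the node type and the filesystem-ish data are purely my own
invention):

    import java.util.*;

    // Minimal n-ary tree node. Nothing stops two branches from
    // sharing a child--like a link in a Unix filesystem--yet walking
    // from the root still produces a tree-shaped printout.
    public class Trees {
        static class Node {
            final String name;
            final List<Node> children = new ArrayList<>();
            Node(String name) { this.name = name; }
            void print(String indent) {
                System.out.println(indent + name);
                for (Node c : children) c.print(indent + "  ");
            }
        }

        public static void main(String[] args) {
            Node root = new Node("/");
            Node home = new Node("home");
            Node shared = new Node("shared");
            root.children.add(home);
            root.children.add(shared);
            home.children.add(shared);   // cross-branch "link"
            root.print("");
        }
    }

The shared node simply prints twice, once under each parent.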

> If you print out the code on a big sheet of paper and draw the lines
> between the subroutine references, you will see that it is *not* a
> tree.

In most programs of my experience (30 years, dozens of languages), there
is a fairly strong tree shape to the relationship between functions.
You have your root--the code entry point--and a set of nodes that, in
a well-written program, tend to proceed from high-level nodes to
low-level nodes.

A fellow I used to work with wrote a program that, given the source
files for any C program, CREATED the tree of routines and calls. The
only thing you really have to be aware of is recursion, and that pretty
much just requires not repeating an existing node (or in some call
graphs I've seen, repeating it only once as a leaf).
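
The core of such a tool is simple enough to sketch (Java rather than
C here, and every name is invented--this is not that fellow's actual
program):

    import java.util.*;

    // Sketch: turning a call graph into a call tree. Shared callees
    // are duplicated under each caller; recursion is cut off by
    // printing the repeated routine once as a leaf.
    public class CallTree {
        static void print(Map<String, List<String>> calls, String fn,
                          Set<String> onPath, String indent) {
            if (onPath.contains(fn)) {              // recursion detected:
                System.out.println(indent + fn + " (recursive)");
                return;                             // emit as leaf, stop
            }
            System.out.println(indent + fn);
            onPath.add(fn);
            for (String callee :
                     calls.getOrDefault(fn, Collections.<String>emptyList()))
                print(calls, callee, onPath, indent + "  ");
            onPath.remove(fn);                      // done with this branch
        }

        public static void main(String[] args) {
            Map<String, List<String>> calls = new HashMap<>();
            calls.put("main",   Arrays.asList("parse", "report"));
            calls.put("parse",  Arrays.asList("readLine", "parse")); // recursive
            calls.put("report", Arrays.asList("readLine"));  // shared callee
            print(calls, "main", new HashSet<>(), "");
        }
    }

Note how the shared callee prints under both callers--exactly the
duplicate-node point you raise below.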

> Now, *any* graph can indeed be represented as a tree if you are willing
> to display inter-branch references as duplicate nodes, which is what
> the multiple subroutine calls essentially are.

Or (as this fellow did) as branches that refer back to existing nodes.

And consider this: the call path of a program executing (in a
single-threaded environment) is 100% tree-shaped (with the recursion
caveat). That
is, if you graphed it from start to finish, you'd end up with a huge,
perfect tree.

And, yes, some nodes might be repeated, but that doesn't make it not
a perfect tree (by perfect I mean no nodes cross-link). The parse tree
created by a parser repeats nodes--for example, in most parse trees,
there are probably a LOT of "if/else" nodes.


>> Second, you don't really mean algorithm, you mean program. Algorithms
>> typically ARE very hierarchical ever since structured programming was
>> invented. Smaller blocks within larger blocks, just about every
>> algorithm is structured that way.
>
> On a small scale, yes. On a bigger scale it is not hierarchical, per
> above. Many programs are generally hundreds of small trees joined by
> lots of references that bust tree-ness.

Well-written programs ARE usually tree-ish in my experience. High level
routines call low level routines. I'm sure you've heard the terms
"top down" and "bottom up". These refer to the tree-ish-ness of program
analysis and design.

> Picture drawing hundreds of small trees randomly scattered on a large
> piece of paper. Now randomly draw thousands of lines between various
> nodes of the different trees....

But it's NOT random. Low level routines don't call high level routines.


>> You can label them a "lie" if you like, but they appear to reflect the
>> truth of very many situations to me.
>
> Well, every abstraction is probably a lie to some extent.

The term "lie" is pejorative and, I think, misleading. A lie is told with
the *intent* to deceive. Abstractions are told with the intent to get at
a truth. If you really feel tree-shaped abstractions are incorrect, a
better term might be, well, "incorrect". (-:


>> Clearly most of the human race would seem to agree, since you admit
>> that "most people would naturally produce a tree."
>
> Because most people are naive and go with instant gratification and/or
> the comfortable. Sets are admittedly not immediately comfortable, but
> they allow one to shift up a level of complexity and change management.
> Enlightenment comes with a price. That's life.

Hmmm, the "everyone else is dumb but me" line is so often the mark of a
kook that I recommend you be careful about it. And, for my money, you're
just plain wrong.

Sets are raw, unstructured data. You've agreed that making sense of them
requires tools that add, or highlight, structure.

That's exactly what hierarchies do. One could just as easily claim that
enlightenment involves embracing all forms of tools that allow you to
mine datasets for their value. One might also argue that the naivete is
in failing to recognize the value of higher data structures.

--
|_ CJSonnack <Chris(a)Sonnack.com> _____________| How's my programming? |
|_ http://www.Sonnack.com/ ___________________| Call: 1-800-DEV-NULL |
|_____________________________________________|_______________________|
From: topmind on
> This has nothing to do with DBMSes themselves. Databases are used in
> the most trivial of applications as well as in the most complicated. In
> and of themselves, databases say nothing about the complexity of the
> application.

Good. That is one issue out of the way.

>
>
> >>> I don't understand why you suggest that reporting systems are
> >>> inherently "less complex" than whatever you have in mind.
> >>
> >> Because it's true. It's understandable you don't realize that, because
> >> you've lived in "China" all your professional life (it seems).
> >
> > You are wrong. Like I said elsewhere, I have worked on many kinds of
> > business applications, not just reporting, and one is not inherently
> > more simple than the other. Can you present evidence of this alleged
> > inherent simplicity?
>
> Certainly. Business applications (and I've BEEN a corporate programmer
> for two decades and have extensively used RDBMSes) are highly, if not
> 100%, deterministic. They (biz apps) tend to involve data storage and
> retrieval and data reporting. They tend to be based on a *relatively*
> small set of business rules with a relatively small set of permutations.

Not the ones I encounter. I agree that if things were cleaned up,
normalized, etc., things would be easier, but interrelationships
between things can get very tangly, defying rational abstraction.
Many of the domains you list below are based more or less on
fixed algorithms or the laws of physics. Thus, they tend to
be predictable along certain lines. God does not change the
laws of physics very often.

>
> For truly complex software, consider voice or image recognition, or the
> software used by physicists to delve into quantum physics, or digital
> signal processing, or advanced mathematical programming,
> or the computer-generated imaging software used in the movies.
>
> For that matter, many compilers are vastly more complicated than most
> business apps.

These are not necessarily more "complex", it is just that it
takes longer study of the domain to understand them. If
you have been writing 3D movie software all your life,
then you understand the domain and related math and it
is second nature.

I will agree that the general concept of a given biz
app is usually easier to understand than the things
you listed, but once one is past that difference,
dealing with software change is pretty much a matter
of how to organize things to be able to change them
without too much pain.

>
> How do I know these are more complex than biz apps? Because, as I've
> said, I've been *doing* biz apps for 20 years, and at least some of the
> software fields I listed above are well beyond me.

You seem to be confusing "easy to relate to" and "easy to
organize". These can be very different things.

>
> Trust me, especially with the frameworks and tools available today,
> biz apps are easy.

No, the tools just mean one is now focusing on the sticky parts
instead of the easy but repetitious parts. In the domains you
listed above, if there are no tools for them, then one spends
more time on grunty stuff.

A carpenter with power tools still has to do as much thinking
as carpenters before power tools, perhaps more because lapses
can result in bigger boo-boos. Automating grunt work
does not necessarily make your job simpler, but rather
means you spend more time on less repetitious or tedious
issues.

Management might cut the IT department in half after
a grand tool comes along, and now half the people have
to produce the same net end product. You might
spend your day doing less grunt stuff, but you are
still doing stuff.


>
> > And even if it was true, at least you seem to agree that it is an area
> > where polymorphism does not help much.
>
> No, I never agreed with that. I used p/m quite a lot, actually.

Then why bring up other domains?

>
> > Can I at least get you to admit
> > that? An admission that polymorphism is NOT a universal,
> > general-purpose technique is an important revelation.
>
> If you mean it's not useful in every program, of course I agree.
> Very little is useful in *every* program.
>

Well, then let's narrow down where it is useful and where it is not.

-T-

From: Chris Sonnack on
topmind writes:

>> Certainly. Business applications (and I've BEEN a corporate programmer
>> for two decades and have extensively used RDBMSes) are highly, if not
>> 100%, deterministic. They (biz apps) tend to involve data storage and
>> retrieval and data reporting. They tend to be based on a *relatively*
>> small set of business rules with a relatively small set of permutations.
>
> Not the ones I encounter.

You don't appear to have the experience to make a valid comparison.
Why don't you describe the most complex business situation you've
encountered, and we'll see how it compares to some situations we've
encountered.

> ...but interrelationships between things can get very tangly, defying
> rational abstraction.

The inter-relationships between things in quantum physics and various
mathematical constructs are *beyond* our understanding today. No matter
how "tangly" business relationships are, they're a walk in the park by
comparison.

> Many of the domains you list below are based more or less on
> fixed algorithms or the laws of physics.

"Fixed algorithms"? In many of these fields, the algorithms have to
be invented as they go along.

(True story: during production of the animated film, THE INCREDIBLES,
they were having a heck of a time designing the software that did the
simulation of Violet's long hair (the outtakes are hysterical). At
one point the chief software designer told the Producer that, "Hey,
long hair is *theoretical*!" "What do you mean, theoretical?" replied
the Producer. "We open in nine months!" If you've seen the film,
you know they did solve it, but they had to *invent* it from scratch.)

In *most* new feature animated films, the software is *invented* to
meet the needs and is vastly more complex than any business software.
For instance, each new Pixar film adds a new trick. TOY STORY was
one of the first good 3D animated films. TOY STORY 2 added a much
better shading ability. MONSTERS, INC added hair and fur. FINDING
NEMO added water simulation and the ability to simulate many more
objects in a scene. These are all "ground breaking" applications
that were researched and developed. That just doesn't happen very
often in biz apps--existing tools, technologies and understandings
meet the need *most* of the time.

There is a difference between "convoluted" (which biz apps can be)
and "complex" (which very few biz apps are).

>> For truly complex software, consider voice or image recognition,
>> or the software used by physicists to delve into quantum physics,
>> or digital signal processing, or advanced mathematical programming,
>> or the computer-generated imaging software used in the movies.
>>
>> For that matter, many compilers are vastly more complicated than most
>> business apps.
>
> These are not necessarily more "complex", it is just that it
> takes longer study of the domain to understand them.

ROFL. And why do you think that is? You don't suppose it might be
that the field is more complex than any business application?

> If you have been writing 3D movie software all your life, then you
> understand the domain and related math and it is second nature.

Don't take my word for it, seek out a long time professional in that
field and ask them.

> I will agree that the general concept of a given biz app is usually
> easier to understand than the things you listed,...

No kidding!

> ...but once one is past that difference, dealing with software change
> is pretty much a matter of how to organize things to be able to change
> them without too much pain.

That's true to a great extent. But this bit was about the relative
complexity of biz apps, not about change management. You were protesting
that biz apps were just as complex...I hope we've cleared that up now.


>> Trust me, especially with the frameworks and tools available today,
>> biz apps are easy.
>
> No, the tools just mean one is now focusing on the sticky parts
> instead of the easy but repetitious parts.

Sometimes. But also sometimes they do the really hard bits so you
don't have to. The *reason* they can do this is that biz apps are
simple and tend to be fairly similar in a given domain. Thus a
framework can make assumptions about the nature of the business
and stand a good chance of being right.

Example: in about ten minutes, I can design a query for a remote DB
and embed that query in Excel. I can create charts based on that
query and then link to those charts in a PowerPoint presentation.
The end result is a presentation I can give a manager that refreshes
merely on opening, so s/he is viewing current production data in a
nice set of charts.

I can do this, because biz apps are simple enough that the tools
can anticipate many of my needs.


>>> Can I at least get you to admit that? An admission that
>>> polymorphism is NOT a universal, general-purpose technique
>>> is an important revelation.

(Perhaps it is to you, but not to anyone who understands it.)

>> If you mean it's not useful in every program, of course I agree.
>> Very little is useful in *every* program.
>
> Well, then let's narrow down where it is useful and where it is not.

Any time the system can be designed to use an abstraction with a
generic interface is a place where polymorphism is useful.

* Handing input and output objects to functions.
* Handing a processing "callback" object to a function.
* Processing similar but different messages or objects.
* Implementing various toolkits (e.g. JavaBeans concept).
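
To make the "callback" bullet concrete, here's a minimal sketch (Java;
all the names are mine, not from any real toolkit):

    // emit() is written once against an interface; the behavior
    // varies polymorphically with whatever object the caller hands in.
    interface LineHandler {
        void handle(String line);
    }

    class Printer implements LineHandler {
        public void handle(String line) { System.out.println(line); }
    }

    class Counter implements LineHandler {
        int count = 0;
        public void handle(String line) { count++; }
    }

    public class Report {
        static void emit(String[] lines, LineHandler h) {
            for (String line : lines)
                h.handle(line);           // dispatch picked at runtime
        }

        public static void main(String[] args) {
            String[] data = { "alpha", "beta" };
            emit(data, new Printer());    // prints each line
            Counter c = new Counter();
            emit(data, c);                // counts lines instead
            System.out.println(c.count);  // 2
        }
    }

The report loop never changes; callers swap in the behavior.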


--
|_ CJSonnack <Chris(a)Sonnack.com> _____________| How's my programming? |
|_ http://www.Sonnack.com/ ___________________| Call: 1-800-DEV-NULL |
|_____________________________________________|_______________________|
From: topmind on


Chris Sonnack wrote:
> topmind writes:
>
> >> Certainly. Business applications (and I've BEEN a corporate programmer
> >> for two decades and have extensively used RDBMSes) are highly, if not
> >> 100%, deterministic. They (biz apps) tend to involve data storage and
> >> retrieval and data reporting. They tend to be based on a *relatively*
> >> small set of business rules with a relatively small set of permutations.
> >
> > Not the ones I encounter.
>
> You don't appear to have the experience to make a valid comparison.
> Why don't you describe the most complex business situation you've
> encountered, and we'll see how it compares to some situations we've
> encountered.


It would take too long to describe the situations. However, an
interesting exercise is the "payroll example" at:

http://www.geocities.com/tablizer/struc.htm


>
> > ...but interrelationships between things can get very tangly, defying
> > rational abstraction.
>
> The inter-relationships between things in quantum physics and various
> mathematical constructs is *beyond* our understanding today. No matter
> how "tangly" business relationships are, they're a walk in the park by
> comparison.

Apparently you've never worked with the marketing department. Similar
to quantum physics, it is hard to find rational, or at least "natural",
patterns to their requests.

>
> > Many of the domains you list below are based more or less on
> > fixed algorithms or the laws of physics.
>
> "Fixed algorithms"? In many of these fields, the algorithms have to
> be invented as they go along.

How is that different from a biz app, such as finding algorithms for
compensating sales people or regional managers in a "fair" manner?

>
> (True story: during production of the animated film, THE INCREDIBLES,
> they were having a heck of a time designing the software that did the
> simulation of Violet's long hair (the outtakes are hysterical). At
> one point the chief software designer told the Producer that, "Hey,
> long hair is *theoretical*!" "What do you mean, theoretical?" replied
> the Producer. "We open in nine months!" If you've seen the film,
> you know they did solve it, but they had to *invent* it from scratch.)
>
> In *most* new feature animated films, the software is *invented* to
> meet the needs and is vastly more complex than any business software.
> For instance, each new Pixar film adds a new trick. TOY STORY was
> one of the first good 3D animated films. TOY STORY 2 added a much
> better shading ability. MONSTERS, INC added hair and fur. FINDING
> NEMO added water simulation and the ability to simulate many more
> objects in a scene. These are all "ground breaking" applications
> that were researched and developed. That just doesn't happen very
> often in biz apps--existing tools, technologies and understandings
> meet the need *most* of the time.
>
> There is a difference between "convoluted" (which biz apps can be)
> and "complex" (which very few biz apps are).

Well, I still don't have a clear definition of the difference between
the two. I suppose "convoluted" could mean that it is unnecessarily
complex; in other words, ruined by sloppy humans. But, maybe there are
simpler algorithms for the 3D movie software; it's just that nobody has
discovered them yet.

Note that I majored in "graphics/CAD" and wrote 3D scene generation
software in Pascal as a student assignment. A good portion of the code
was merely cross-referencing and looking up stuff such as "which other
polygons intersect me". A lot of it could be reduced to mere database
operations.
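
For instance, the intersect lookup boils down to a self-join. A rough
sketch (Java, with made-up names and toy bounding boxes standing in
for the real polygon math):

    import java.util.*;

    // "Which other polygons intersect me" as a self-join over a table
    // of bounding boxes. The data is illustrative only.
    public class Intersect {
        static class Box {
            final String id; final int x1, y1, x2, y2;
            Box(String id, int x1, int y1, int x2, int y2) {
                this.id = id; this.x1 = x1; this.y1 = y1;
                this.x2 = x2; this.y2 = y2;
            }
            boolean overlaps(Box o) {
                return x1 <= o.x2 && o.x1 <= x2
                    && y1 <= o.y2 && o.y1 <= y2;
            }
        }

        public static void main(String[] args) {
            List<Box> table = Arrays.asList(
                new Box("A", 0, 0, 4, 4),
                new Box("B", 3, 3, 8, 8),
                new Box("C", 10, 10, 12, 12));
            // Roughly: SELECT a.id, b.id FROM boxes a, boxes b
            //          WHERE a.id < b.id AND the boxes overlap
            for (Box a : table)
                for (Box b : table)
                    if (a.id.compareTo(b.id) < 0 && a.overlaps(b))
                        System.out.println(a.id + " intersects " + b.id);
        }
    }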

>
> >> For truly complex software, consider voice or image recognition,
> >> or the software used by physicists to delve into quantum physics,
> >> or digital signal processing, or advanced mathematical programming,
> >> or the computer-generated imaging software used in the movies.
> >>
> >> For that matter, many compilers are vastly more complicated than most
> >> business apps.


A lot of compiler work is also merely cross-referencing stuff. Once it
is parsed and in data structures, the rest is mostly just
collection-oriented operations on symbol tables, precedence tables,
assembler/machine-code snippet lookups, etc. Hell, even the Yacc/Lex
compiler kits are largely data-driven (via text files containing
attributes).
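
A toy sketch of what I mean (Java, made-up names; obviously no real
compiler is this simple):

    import java.util.*;

    // Compiler-as-table-lookups: a symbol table, a precedence table,
    // and a code-snippet table, each just a keyed collection.
    public class TinyCompiler {
        public static void main(String[] args) {
            Map<String, String> symbols = new HashMap<>();     // name -> type
            Map<String, Integer> precedence = new HashMap<>(); // op -> binding
            Map<String, String> snippets = new HashMap<>();    // op -> code

            symbols.put("total", "int");
            precedence.put("+", 1);
            precedence.put("*", 2);
            snippets.put("+", "ADD");
            snippets.put("*", "MUL");

            // "Compiling" the expression total * 3 is then mostly lookups:
            System.out.println("type(total) = " + symbols.get("total"));
            System.out.println("prec(*)     = " + precedence.get("*"));
            System.out.println("emit        = " + snippets.get("*"));
        }
    }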

> >
> > These are not necessarily more "complex", it is just that it
> > takes longer study of the domain to understand them.
>
> ROFL. And why do you think that is? You don't suppose it might be
> that the field is more complex than any business application?


Again, I have not seen any clear metric to measure "complex". The
number of lines of code in a payroll or tax system is not necessarily
less than in a 3D movie generator. So far they are just different
flavors of complexity, NOT different "levels".


>
> > If you have been writing 3D movie software all your life, then you
> > understand the domain and related math and it is second nature.
>
> Don't take my word for it, seek out a long time professional in that
> field and ask them.

Everybody thinks their field is the most complex. Human ego usually
gets in the way of good answers.

>
> > I will agree that the general concept of a given biz app is usually
> > easier to understand than the things you listed,...
>
> No kidding!

But that is not necessarily related to "complexity". What I meant was
that it takes less training to understand the general concepts of a
given business. That does NOT mean the business is simple.

>
> > ...but once one is past that difference, dealing with software change
> > is pretty much a matter of how to organize things to be able to change
> > them without too much pain.
>
> That's true to a great extent. But this bit was about the relative
> complexity of biz apps, not about change management. You were protesting
> that biz apps were just as complex...I hope we've cleared that up now.
>

Nope.

>
> >> Trust me, especially with the frameworks and tools available today,
> >> biz apps are easy.
> >
> > No, the tools just mean one is now focusing on the sticky parts
> > instead of the easy but repetitious parts.
>
> Sometimes. But also sometimes they do the really hard bits so you
> don't have to. The *reason* they can do this is that biz apps are
> simple and tend to be fairly similar in a given domain. Thus a
> framework can make assumptions about the nature of the business
> and stand a good chance of being right.

I don't know about that. Companies have had a hard time
making generic business application software. Things like
SAP take boatloads of SAP experts to tune the sucker, almost
more staff than building it from scratch.

Also, I won't necessarily dispute that polymorphism might
be great for making a spreadsheet (Excel clone) from
scratch, but that is not what those in my domain
do for a living (although I once made a FoxPro app
that executed formulas stored in tables, which is
sort of a hand-rolled spreadsheet).
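
The idea translates to any language. A rough sketch (Java instead of
FoxPro; the names and the one-operator formula syntax are made up):

    import java.util.*;

    // "Hand-rolled spreadsheet": formulas live in a table (a map here,
    // standing in for a database table) and are interpreted at runtime.
    public class FormulaTable {
        static double operand(String tok, Map<String, Double> cells) {
            Double v = cells.get(tok);          // cell reference...
            return (v != null) ? v : Double.parseDouble(tok); // ...or literal
        }

        public static void main(String[] args) {
            Map<String, Double> cells = new HashMap<>();
            cells.put("price", 10.0);
            cells.put("qty", 3.0);

            // The stored-formula "table": cell name -> expression.
            Map<String, String> formulas = new LinkedHashMap<>();
            formulas.put("subtotal", "price * qty");
            formulas.put("total", "subtotal * 1.08");

            for (Map.Entry<String, String> row : formulas.entrySet()) {
                String[] t = row.getValue().split(" ");   // "lhs op rhs"
                double lhs = operand(t[0], cells);
                double rhs = operand(t[2], cells);
                double val = t[1].equals("+") ? lhs + rhs : lhs * rhs;
                cells.put(row.getKey(), val);   // result feeds later rows
                System.out.println(row.getKey() + " = " + val);
            }
        }
    }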

>
> Example: in about ten minutes, I can design a query for a remote DB
> and embed that query in Excel. I can create charts based on that
> query and then link to those charts in a PowerPoint presentation.
> The end result is a presentation I can give a manager that refreshes
> merely on opening, so s/he is viewing current production data in a
> nice set of charts.

Until you need complex cross-referencing or other fancy calculations
that bring Excel to its knees. Excel has a Turing-Complete
language in it; thus it can implement ANY defined algorithm, even 3D
movie graphics (if you wait long enough for the output). So, I am not
sure you have made a point.

>
> I can do this, because biz apps are simple enough that the tools
> can anticipate many of my needs.

No, because it has a TC language built in.

>
>
> >>> Can I at least get you to admit that? An admission that
> >>> polymorphism is NOT a universal, general-purpose technique
> >>> is an important revelation.
>
> (Perhaps it is to you, but not to anyone who understands it.)

I guess I am just dumb.

>
> >> If you mean it's not useful in every program, of course I agree.
> >> Very little is useful in *every* program.
> >
> > Well, then let's narrow down where it is useful and where it is not.
>
> Any time the system can be designed to use an abstraction with a
> generic interface is a place where polymorphism is useful.
>
> * Handing input and output objects to functions.
> * Handing a processing "callback" object to a function.
> * Processing similar but different messages or objects.

This I disagree with. The real-world patterns of
variations on a theme do NOT fit poly in general.
Classification philosophers have studied such far
more carefully than you and I and generally
agree with me on this.

> * Implementing various toolkits (e.g. JavaBeans concept).

Kind of vague. What is not a "concept"? (JavaBeans suck BTW)

>
>

-T-
