From: Daniel Parker on
"H. S. Lahman" <h.lahman(a)verizon.net> wrote in message
news:P6Ryf.6606$C%3.3080(a)trndny03...
> Responding to Jacobs...
>
>> SQL is not an implementation. What is the difference between locking
>> yourself to SQL instead of locking yourself to Java? If you want
>> open-source, then go with PostgreSQL. What is the diff? Java ain't no
>> universal language either.
>
> Of course it's an implementation! It implements access to physical
> storage.

That literally doesn't make sense. It's like saying that a Java interface
is an implementation because it implements access to the properties of a
physical instantiation.

>
> BTW, remember that I am a translationist. When I do a UML model, I don't
> care what language the transformation engine targets in the model
> implementation. (In fact, transformation engines for R-T/E typically
> target straight C from the OOA models for performance reasons.) Thus
> every 3GL (or Assembly) represents a viable alternative implementation of
> the notion of '3GL'.
>
I think it's fair to say that SQL has, for all its faults, been enormously
successful, to the tune of a multi-multi-billion dollar industry, and that
the UML translationist approach has not. It's been over ten years since the
translationist industry claimed to have solved the problem of 100 percent
translation, but where is it? It's niche; it's nowhere. Other technologies
have arrived (e.g. the W3C XML stack, and particularly XSLT transformation)
that dwarf executable UML in practical application. Why do you think
that is? What do you think it is about software development that makes
executable UML marginal, and other technologies like SQL important?

Regards,
Daniel Parker


From: Hasta on
> >
> > Well, there is an objective measure of the complexity of
> > 100000110000000000. It's the length of the smallest
> > program able to generate that string.
> >
> > Browse for Chaitin-Kolmogorov complexity/randomness.
> > A fascinating subject :-)
>
> This is exactly what I had in mind (although I wanted to emphasize the
> Martin-Löf criterion of randomness). Therefore, what is the length of
> that program in the earlier example?
>
> From Wikipedia: "More formally, the complexity of a string is the
> length of the string's shortest description in some fixed description
> language. The sensitivity of complexity relative to the choice of
> description language is discussed below."
>
> Excuse me, but this is not a very practical suggestion. For finite
> objects there is no mathematically sound way to establish that
>
> 100000110000000000
>
> is more complex than
>
> 100000000000000000
>
> Again, this is what the earlier message "no *objective* measure for
> program complexity" was saying.
>

Well, pick your preferred language and make it part of your definition
of complexity. You then have a very objective measure. Chaitin uses
a micro-lisp with seven statements.

With all reasonable general-purpose languages (including English :-)
complexity (100000110000000000) is greater than complexity
(100000000000000000). In English, the complexity of the latter is 6.
The complexity of the former is probably 11.

Of course, the main problem with C/K complexity is that it is not
computable in general :-)
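For practical comparisons one falls back on computable upper bounds. A
quick Python sketch (my own illustration, not real C/K complexity, with
zlib as the fixed description language; compressed size only bounds the
true complexity from above):

import random
import zlib

def complexity_proxy(data: bytes) -> int:
    # Compressed length: a computable upper bound on description
    # length relative to the fixed "language" of the zlib compressor.
    return len(zlib.compress(data, 9))

regular = b"10" * 5000                  # highly patterned, 10000 bytes
random.seed(0)
irregular = bytes(random.choice(b"01") for _ in range(10000))

print(complexity_proxy(regular))    # a few dozen bytes: short description
print(complexity_proxy(irregular))  # far larger: coin flips resist compression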

Have a nice day, Mikito.

--- Raoul

From: Dmitry A. Kazakov on
On Mon, 16 Jan 2006 23:13:45 +0100, Hasta wrote:

> In article <1137447794.122780.151500(a)f14g2000cwb.googlegroups.com>,
> mikharakiri_nospaum(a)yahoo.com says...
>> In general, I agree with Bruce -- there is no objective measure for
>> program complexity. What is the complexity measure of
>> 100000110000000000
>
> Well, there is an objective measure of the complexity of
> 100000110000000000. It's the length of the smallest
> program able to generate that string.

See Richard's paradox.

[ There can be no objective measure if no language is fixed. ]

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Dmitry A. Kazakov on
On 16 Jan 2006 16:43:10 -0800, Mikito Harakiri wrote:

> Perhaps I have to explain why the concept of "random" sequence is
> important in the context of this complexity discussion. It is
> considered more challenging to generate "random" sequence as compared
> to "nonrandom" one, hence intuitively random sequences are more complex
> than non-random ones.

Huh, 1111111 is exactly as random as 10101101. As a matter of fact, there
is no way to generate random sequences using an FSM. There is not even a
way to test whether a sequence is a realization of a random process. You
can only test some hypothesis H, and the answer will be: the probability
of H is in the interval [a,b].
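A minimal Python sketch of the only kind of answer available (my toy
example; the normal-approximation interval is one conventional choice):

import math

def fairness_interval(bits: str, z: float = 1.96) -> tuple:
    # 95% normal-approximation confidence interval for P(bit == 1),
    # testing the hypothesis "the source is a fair coin".
    n = len(bits)
    p = bits.count("1") / n
    half = z * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - half), min(1.0, p + half))

bits = "10" * 100            # plainly periodic, yet P(1) is exactly 0.5
lo, hi = fairness_interval(bits)
print("P(1) lies in [%.2f, %.2f]" % (lo, hi))

The periodic string sails through: the test bounds the bias, but says
nothing about randomness itself.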

>> What does it mean for an object to be "infinite" in this context, and
>> what does it mean for an infinite object to be "complex" in this context?
>
> An infinite sequence of 0s and 1s versus a finite one. Again, the
> concept of a random infinite sequence is quite counterintuitive.

Well, randomness as a whole is counterintuitive, and there is nothing to
be done about it.

> You can prefix a
> random sequence with a million 1s, and it would still be a random
> sequence. Once again, there is no way to define what a random *finite*
> sequence is.

Egh? What is meant here by finiteness? That the random variable ceases
to exist after N trials, or that you stop the trial after N attempts?

> In layman's terms, if you go to Vegas and the roulette produces a
> sequence of 10 zeros in a row, you can suspect that the roulette is
> defective, but there is no mathematical foundation that would support
> your belief.

Right, which does not ruin Kolmogorov's concept of complexity, which
fixes the language. 10101101 looks complex not because it inherently is,
but solely because of the language humans are using. Change the language
and it might become simple.
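To illustrate that language dependence with a toy Python sketch (both
"description languages" here are my own inventions):

def literal_len(s: str) -> int:
    # Language 1: spell the string out verbatim.
    return len(s)

def rle_len(s: str) -> int:
    # Language 2: run-length pairs, one (count, symbol) pair per run.
    runs = 1 + sum(a != b for a, b in zip(s, s[1:]))
    return 2 * runs

ones = "1" * 32
alt = "10" * 16
print(literal_len(ones), literal_len(alt))   # 32 32 -- equally complex
print(rle_len(ones), rle_len(alt))           # 2 64 -- ranking reversed

Neither ranking is "the" complexity; each is complexity relative to a
fixed language.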

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: H. S. Lahman on
Responding to Parker...

>>>SQL is not an implementation. What is the difference between locking
>>>yourself to SQL instead of locking yourself to Java? If you want
>>>open-source, then go with PostgreSQL. What is the diff? Java ain't no
>>>universal language either.
>>
>>Of course it's an implementation! It implements access to physical
>>storage.
>
>
> That literally doesn't make sense. It's like saying that a Java interface
> is an implementation because it implements access to the properties of a
> physical instantiation.

You have to step up a level in abstraction. Imagine you are a code
generator and think of it in terms of invariants and problem space
abstraction.

The invariant is that all physical storage needs to be accessed in some
manner. There are lots of ways to store data and lots of ways to access
it. Therefore ISAM, SQL, CODASYL, and C's gets all represent specific
implementations of access to physical storage that resolve the
invariant.
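To make that concrete, here is a toy Python sketch (my own names, not
anything a real transformation engine emits): one abstract access
invariant, several interchangeable resolutions.

from abc import ABC, abstractmethod
import sqlite3

class Storage(ABC):
    """The invariant: values can be stored and fetched by key. How the
    bits reach physical storage is each implementation's business."""
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...
    @abstractmethod
    def get(self, key: str) -> str: ...

class SqlStorage(Storage):
    # Resolves the invariant through a relational engine.
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
    def put(self, key, value):
        self.db.execute("REPLACE INTO kv VALUES (?, ?)", (key, value))
    def get(self, key):
        return self.db.execute(
            "SELECT v FROM kv WHERE k = ?", (key,)).fetchone()[0]

class DictStorage(Storage):
    # Stand-in for an ISAM/flat-file resolution of the same invariant.
    def __init__(self):
        self.data = {}
    def put(self, key, value):
        self.data[key] = value
    def get(self, key):
        return self.data[key]

store: Storage = SqlStorage()   # swap in DictStorage(); callers can't tell
store.put("42", "widgets")
print(store.get("42"))

Code written against Storage neither knows nor cares which resolution it
got; that is the sense in which SQL is "an implementation".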

Similarly, for interfaces the invariant is that every popular 3GL
provides a type system that allows access to properties. But each 3GL's
type system provides different syntax. Therefore every popular 3GL is a
specific implementation of a type system interface.

>>BTW, remember that I am a translationist. When I do a UML model, I don't
>>care what language the transformation engine targets in the model
>>implementation. (In fact, transformation engines for R-T/E typically
>>target straight C from the OOA models for performance reasons.) Thus
>>every 3GL (or Assembly) represents a viable alternative implementation of
>>the notion of '3GL'.
>>
>
> I think it's fair to say that SQL has, for all its faults, been enormously
> successful, to the tune of a multi-multi-billion dollar industry, and that
> the UML translationist approach has not. It's been over ten years since the
> translationist industry claimed to have solved the problem of 100 percent
> translation, but where is it? It's niche; it's nowhere. Other technologies
> have arrived (e.g. the W3C XML stack, and particularly XSLT transformation)
> that dwarf executable UML in practical application. Why do you think
> that is? What do you think it is about software development that makes
> executable UML marginal, and other technologies like SQL important?

All the world loves a straight man. B-)

There are several reasons. The technology for full translation has
actually been available since the early '80s. However, the technology
had not matured enough for good optimization for _general computing_
until the late '90s. (Note that it took a decade for C optimization to
get remotely close to the level of optimization that FORTRAN had in '74,
and the translation optimization problem is much more complex.)

[FYI, translation has already been widely accepted in niches for many
moons. The ATLAS test requirements specification language is universally
used in mil/aero, and it is translated directly into executables. That's
a billion dollar business. 4GLs like HPVEE and LabWindows have also been
widely used for electronic system analysis and design. However, the big
translation domain lies in CRUD/USER processing. Any time one develops
an application using a RAD IDE like Access or Delphi one is essentially
using translation. That's a multi-billion dollar niche that has been
around since the '80s.]

Probably the second most important reason translation is only just
exiting the Early Adopter stage is that using translation requires a
major sea change in the way one develops software. It is not just a
matter of pushing a button and having 3GL or Assembly code pop out the
other end. Translation affects almost every aspect of the development
process, from the way one approaches problems to the way one tests. From
a management perspective, a major sea change in the way things are done
spells RISK, and that makes selling translation tough.

I think the third reason is developer resistance and NIH. Most software
developers today have literally grown up writing 3GL code. Trying to
persuade them that, of all the things a software developer does, writing
3GL code is the least important is a tough sell. (Going into a sales
presentation is like stepping back in time to the early '60s and trying to
sell COBOL or FORTRAN to a bunch of BAL programmers; the arguments are
the same with only the buzzwords changing.)

A fourth reason is the lack of standardization. Until OMG's MDA effort
all translation tools were monolithic; modeling, code generation,
simulation, and testing were all done in the same tool with proprietary
repositories, AALs, and supporting tools. (Prior to UML, they each had
unique modeling notations as well.) That effectively marries the shop
to a specific vendor. In '95 Pathfinder was the first company to
provide plug & play tools that would work with other vendors' drawing
tools. MDA has changed that in the '00s so now plug & play is a reality.

A fifth reason is price. Translation tools are not cheap because they
require very fancy optimization, graphics for model animation, built-in
test harnesses, and a bunch of other support stuff. (Not to mention
cutting-edge automation design.) Vendors want to recover that cost, so
one has a Catch-22: the Early Adopter market is too small to allow
recovery through shrink-wrap pricing, but the market will only grow
slowly if one uses recovery pricing.

However, I think all this is kind of beside the point. Technologies
like SQL, the W3C XML stack, and XSLT transformation are just computing
space /implementations/ that a transformation engine can target. Part
of the transformation engine's optimization problem is picking the right
computing space technology for the problem at hand from those available.
IOW, apropos of the opening point, the application developer using
translation is working at a much higher level of abstraction, so
SQL/XML/XSLT concerns belong to a different union -- the transformation
engine developer.
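As a deliberately tiny Python sketch of that division of labor (all
names are hypothetical, and real engines are vastly more sophisticated):
the application side states the model; the engine side picks and emits
the target technology.

# The application developer's level of abstraction: a model, no 3GL.
model = {"entity": "Order", "attributes": ["id", "total"]}

def transform(model: dict, target: str) -> str:
    # Toy "transformation engine": one model, interchangeable
    # computing-space technologies as targets.
    name, attrs = model["entity"], model["attributes"]
    if target == "sql":
        cols = ", ".join("%s TEXT" % a for a in attrs)
        return "CREATE TABLE %s (%s);" % (name, cols)
    if target == "xml":
        fields = "".join("<%s/>" % a for a in attrs)
        return "<%s>%s</%s>" % (name, fields, name)
    raise ValueError("unknown target: " + target)

print(transform(model, "sql"))   # the engine chose a relational target
print(transform(model, "xml"))   # same model, different implementation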


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl(a)pathfindermda.com
Pathfinder Solutions -- Put MDA to Work
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
(888)OOA-PATH