From: H. S. Lahman on
Responding to Bouma...

>>Here we disagree somewhat. What you are describing here is
>>implementation inheritance (i.e., the subclass overrides a defined
>>property implementation that already exists in a superclass). This
>>was one of those ideas in early OO development that seemed like a
>>good idea at the time but in practice led to more problems than it
>>cured. However, in modern practice one tries to avoid implementation
>>inheritance for a variety of reasons, of which fragile
>>maintainability is even more important than the potential for LSP
>>problems. So in a well-formed OO application responsibilities are
>>only implemented at the leaf subclass level, which creates an
>>environment of pure substitution between subclass implementations of
>>a superclass responsibility.
>
>
> Isn't that somewhat like bending reality to meet a certain definition
> D of said reality? Because I find the requirement to implement type
> definitions only at the leaf level too restrictive, and to me it seems
> this requirement is invented not to be practical but to be able to meet
> a given principle's demands.

The primary goal of the OO paradigm is to provide software that is
maintainable in the face of volatile requirements over time. The
debates over LSP themselves demonstrate that polymorphic dispatch in
generalization structures is fragile, especially during maintenance.
Practical experience has indicated that implementation inheritance is
especially prone to foot-shooting during maintenance.

That's because overrides add an additional dimension to resolving object
properties. Without overrides the client can safely access the
generalization structure at the highest level where a responsibility
appears -- always. To provide a client collaboration the developer only
needs to understand what the superclass' responsibility semantics are.

In contrast, with overrides the developer must understand the entire
tree and _where the overrides are located_. For example:

                 [Service]
                 + doIt()
                     A
                     |
            +--------+--------+
            |                 |
       [ServiceA1]       [ServiceB1]
      <+ overrideA>
            A
            |
    +-------+-------+
    |               |
[ServiceA11]   [ServiceA12]

The fact that the override exists for [ServiceA1] indicates that the
behavior Service.doIt is unacceptable for that limb of the tree. One
already has an LSP problem because that notion of 'unacceptable' says
that Service.doIt and ServiceA1.doIt are not substitutable. That
lack of substitutability is what forces the developer to understand the
whole tree when deciding where a collaboration should access the tree.

Let's assume that only ServiceA1.doIt is acceptable to a particular
client; the developer then implements the collaboration only with
members of the [ServiceA1] set.

Now let's assume that some maintenance is done:

                 [Service]
                 + doIt()
                     A
                     |
            +--------+--------+
            |                 |
       [ServiceA1]       [ServiceB1]
            A
            |
      +-----+--------------+
      |                    |
[Intermediate]        [ServiceA13]
<+ overrideA>
      A
      |
  +---+-----------+
  |               |
[ServiceA11] [ServiceA12]

This has immediately broken the original client accessing [ServiceA1]
because it can now access ServiceA13.doIt, which will have the
unacceptable Service.doIt behavior. IOW, when the maintenance was done,
the collaboration should have been modified to access only members of
the [Intermediate] set.

That means that whenever one changes the level at which overrides are
done, one must go and check /every/ client context to see whether it
has been broken. That is a major no-no for efficient maintenance and
it opens up a lot of opportunities for defects. Problems with this sort
of thing can be quite subtle and intermittent, which makes diagnosis
tricky and increases the cost of repair. For that reason most shops
have decided to sacrifice the convenience of implementation inheritance
to avoid downstream maintenance headaches and field escapes.
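
To make the maintenance hazard concrete, here is a minimal Java sketch
that mirrors the diagrams above; the method bodies and the Client class
are invented purely for illustration.

class Service {
    // The responsibility's semantics are defined here, at the root.
    public void doIt() { System.out.println("root Service behavior"); }
}

class ServiceA1 extends Service {
    // The override: the root behavior is unacceptable for this limb.
    @Override
    public void doIt() { System.out.println("specialized A1 behavior"); }
}

class ServiceA11 extends ServiceA1 { }
class ServiceA12 extends ServiceA1 { }

class Client {
    // Written against the [ServiceA1] set because only ServiceA1.doIt
    // is acceptable to this client.
    static void collaborate(ServiceA1 s) { s.doIt(); }

    public static void main(String[] args) {
        collaborate(new ServiceA11());   // "specialized A1 behavior"
        collaborate(new ServiceA12());   // "specialized A1 behavior"
    }
}

After the maintenance, the override moves down into a new Intermediate
class and ServiceA13 is added as a direct subclass of ServiceA1;
collaborate(new ServiceA13()) still compiles and runs, but now silently
executes the root Service.doIt behavior the client considered
unacceptable.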

>
> Because what about the situation where the developer has to work with
> a library where a given class C has a processing method M and this
> method M can be replaced by your own if you want to, but you don't have
> to. In effect, doing implementation inheritance. Is this then 'bad
> practise' ? If so, why? I don't see it, especially because the library
> has to supply a functional M, however also wants to be flexible so that
> a developer could replace the method with an override in a derived
> class.

There are some mitigating factors in this example. As you indicate, some
libraries are /designed/ to be overridden. I am not a fan of this, but
if the library is third party software, you probably don't have a
choice. Note, though, that one of the most common problems with
incompatibilities between third party class libraries is doing things
like overriding the 'new' operator within the library.

A second factor is that library classes are commonly regarded as
architectural infrastructure because of their reuse. Generally
architectural infrastructure is supposed to be stable, so requirements
changes are unlikely to affect what one overrides.

A related factor is that any override is likely to be done at the
highest level of derived classes unique to the problem in hand (i.e.,
classes that the developer creates). So the override will apply to all
collaborations with any of the derived classes. IOW, the original
library behavior is effectively invisible to the application and will
never be executed.
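
For instance (a minimal Java sketch; the library class C and the App*
names are hypothetical):

// Third-party library class whose processing method m() is designed to
// be replaceable (hypothetical).
class C {
    public void m() { /* default library processing */ }
}

// Highest-level class derived for the problem in hand. Because the
// override lives here, it applies to every application subclass below,
// so the library's default m() is never executed by the application.
class AppBase extends C {
    @Override
    public void m() { /* application-specific processing */ }
}

class AppVariant1 extends AppBase { }
class AppVariant2 extends AppBase { }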


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl(a)pathfindermda.com
Pathfinder Solutions
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
"Model-Based Translation: The Next Step in Agile Development". Email
info(a)pathfindermda.com for your copy.
Pathfinder is hiring:
http://www.pathfindermda.com/about_us/careers_pos3.php.
(888)OOA-PATH



From: H. S. Lahman on
Responding to Kazakov...

>>>>Not if they are abstracted properly for the problem in hand. I agree
>>>>that OO abstraction is limited and, therefore, one cannot exactly match
>>>>the underlying entity. That is pretty much in the nature of
>>>>abstraction. But one can match reality closely enough in particular
>>>>problem contexts to satisfy LSP.
>>>>
>>>>Better yet, one can ensure robustness in the OO application by
>>>>/requiring/ LSP compliance for the particular problem in hand. IOW, one
>>>>does not ignore LSP when a particular set of property abstractions don't
>>>>support compliance properly for the problem in hand; one finds a
>>>>different suite of property abstractions that do support it.
>>>
>>>Again, it is a heavy conceptual disagreement between us. In my view, a
>>>properly functioning program is not equivalent to LSP satisfied. You are
>>>mixing domains and languages here. Basically, it is a fly and aerodynamics,
>>>once more.
>>
>>Obviously a functioning program is going to depend on a lot more than
>>just LSP being satisfied. But LSP compliance is still a good way
>>to ensure proper functioning.
>
>
> It cannot. You are still mixing things. A trivial counter example would be
> an x86 program executed on a 68k processor. CPU does not know OO!

And you are still resorting to implementation issues that are so far
below the level of OOA/D that they are completely irrelevant. B-)

I can't respond to this sort of implementation stuff because it
represents a different problem space at a far lower level of abstraction
than where one deals with LSP in an OO context. It is not even
tenuously related.

>>I don't like the fly and aerodynamics analogy. LSP in an OO context is
>>about defining the /intrinsic/ properties of a fly.
>
>
> I disagree. LSP is about properties of flying things, insects, species. It
> is not about a fly. It is about flies. More generally, it is about
> interaction of sets of things with their elements and other sets (sounds
> much like OOA/D to me, but never mind). When you reduce it to individual
> entities, then all substitutability problems automatically disappear. But
> it would be absolutely unrealistic agenda.

I agree that one /might/ be able to apply LSP in such grandiose scope.
But one doesn't do that in an OO context. One applies it within a
very narrowly defined and constrained scope.

As far as individuals are concerned, I don't think I understand your
point. One defines intrinsic characteristics for a 'fly' that are
highly tailored to the problem in hand. But those characteristics apply
to all individual flies one might encounter in the problem context. And
if the problem context requires that one distinguish between fruit flies
and deer flies, one will have a generalization relationship.

And if some properties of flies needed for the problem are shared
between fruit flies and deer flies, one may have polymorphic dispatch.
In that case, one can define LSP compliance _within the problem context_
so that deer flies and fruit flies are substitutable. I just don't see
any problem in doing that.
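
A minimal Java sketch of that narrowly-scoped compliance (the
responsibility chosen here, and its values, are invented for
illustration):

abstract class Fly {
    // Intrinsic responsibility shared by all flies in this problem
    // context; its semantics are defined once, here.
    public abstract double wingBeatFrequencyHz();
}

class FruitFly extends Fly {
    @Override
    public double wingBeatFrequencyHz() { return 220.0; }  // illustrative
}

class DeerFly extends Fly {
    @Override
    public double wingBeatFrequencyHz() { return 160.0; }  // illustrative
}

// Any client written against Fly can be handed a FruitFly or a DeerFly
// interchangeably; within this problem context they are substitutable.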

>>To that end one
>>needs to abstract -- at most -- a very restricted view of aerodynamics
>>that is needed to describe how the fly itself works in the problem
>>context. One is not interested in how aerodynamics applies to airplanes
>>or Frisbees.
>
>
> It is not required, but you never know upfront what are the consequences of
> something not required. This problem to solve is actually
> "substitutability" of models.

Ah, but the OO paradigm has chosen a different road to robustness than
trying to be prescient about all possible things that might change in
the future. The paradigm tacitly admits that this is impossible. So,
instead, it chooses to deal exclusively with the problem in hand rather
than what might be. But it does so in such a manner that the resulting
solution for today's problem can be changed easily later when the
requirements change.

So satisfying LSP in today's problem solution may not work for
tomorrow's problem. But in an OO context that's fine because the OO Way
is to make sure that one can modify today's LSP compliance easily if
and when it needs to be changed tomorrow.

>>Yes and no. My issue was around individual members of the set. Sure,
>>one can deal with larger integers in a variety of ways, with or without
>>objects. But when you do so you are artificially subdividing the
>>mathematician's notion of 'integer' and modifying it, which was my
>>point. Thus when the 3GL defines "int" as a fundamental type, it ain't
>>your mathematician's 'integer' any more than your Silver Ghost object is
>>your rare car dealer's notion of 'Silver Ghost'.
>
>
> Yes, it is not integer. Further the actual problem is not infiniteness of
> the integer set, or squares, for they are infinite as well and even
> individual squares have properties you will never be able to model. The
> problem is that any abstraction, any move you make in the solution space
> inherently violates something. Surely you can get rid of this by descending
> abstraction levels down to treating each memory bit individually, but this
> cure is worse than the disease.

But so long as one is not seeking some universal LSP compliance, one can
provide abstractions that ensure LSP compliance in the limited context
of a particular problem in hand.

But we are going in circles on this so I think it is time once again to
agree to disagree.


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl(a)pathfindermda.com
Pathfinder Solutions
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
"Model-Based Translation: The Next Step in Agile Development". Email
info(a)pathfindermda.com for your copy.
Pathfinder is hiring:
http://www.pathfindermda.com/about_us/careers_pos3.php.
(888)OOA-PATH



From: ulf on
H. S. Lahman wrote:
> Responding to Ulf...
> > ...
> > I intended not to describe implementation inheritance but specification
> > inheritance.
> >
> > The problem is not that inherited behavior implementations are overridden.
> > The problem is: What the behavior implementation finally is in the
> > concrete instance is not compatible to the assumptions under which
> > interactions involving superclass instances were modeled. For the
> > problem to arise it does not matter whether these assumptions were
> > derived from actual behavior implementations in the superclass or from
> > specifications of behavior in the superclass or intuitively from the
> > superclass's name and informally described purpose.
>
> What I was responding to was the notion of overriding the superclass in
> the subclass. If one regards this as specification inheritance rather
> than implementation inheritance, then such an override is impossible in
> a well-formed OO application. That is because the semantics of the
> responsibility is defined in one and only one place: the highest level
> in the generalization tree where it is identified. All specializations
> in a direct line of descent /must/ honor that semantic definition or the
> application is mal-formed.

I feel sth is going wrong in this discussion.

1. Could it be that in order to understand and evaluate your argument
it is indispensable for me to exactly know what you mean by a
"responsibility"? It must be sth linguistic for it has a meaning
("semantics of a responsibility") and it must have an existence at
runtime ("clients may invoke them one a peer-to-peer basis") ...

2. To me it seems that you are effectively saying: If one models an
object's behavior upon receipt of a message m in a class X then one
cannot model it again in subclass Y. I have not seen a constraint to
this effect in the UML specification, and I have never heard of a
design guideline to that effect. It must be possible to refine the
modeling of a behavior from a more general, unspecific model in the
superclass context to a more specific, refined model in the context of
the subclass. Of course the magic word here is *refine*. There has been
some work on the topic of when the model in the subclass is a
refinement of the model in the superclass for different modelling
techniques, including state machines (cf. UML) and axiomatic
specifications (cf. DbC). I am not aware of corresponding work on
collaboration models (- maybe work on trace refinement and process
algebra could be a starting point here?)

3. Unfortunately you just clarified what you meant in your previous
reply to my posting but did not respond to my clarification of what I
meant in that posting: The root cause for the problem to be solved by
LSP is not overriding but clients' assumptions about the superclass.
Meaning there is a dependency on assumed properties of superclass
instances. In a given system there might be no problem because all
current realizations of that class satisfy these assumptions. When you
add a new realization to the system, Liskov and Wing (in their TOPLAS
paper) offer two choices for respecting the LSP:

- Either you know and respect these assumptions based on dependency
inversion: the superclass specifies the legal assumptions, the client
makes only these assumptions, and the realization implements these
assumptions.
- Or you stay on the safe side by your realization doing all changes
that are visible through the superclass interface only through methods
inherited from an existing realization.
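
A minimal Java sketch of the first choice (the Account/withdraw names
and the contract wording are made up; the contract is expressed as
comments rather than via a DbC tool):

abstract class Account {
    protected long balanceCents;

    // The superclass specifies the legal assumptions:
    //   pre:  amountCents >= 0
    //   post: the balance is reduced by amountCents, or an exception is
    //         thrown and the balance is unchanged.
    public abstract void withdraw(long amountCents);

    public long getBalanceCents() { return balanceCents; }
}

class CheckingAccount extends Account {
    @Override
    public void withdraw(long amountCents) {
        if (amountCents < 0 || amountCents > balanceCents) {
            throw new IllegalArgumentException("rejected withdrawal");
        }
        balanceCents -= amountCents;   // honors the specified postcondition
    }
}

// A client that makes only the specified assumptions stays correct for
// any new realization that also honors them.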

---
> Note that such compliance does not depend upon client interactions. The
> tricky part where LSP comes into to play is defining the semantics of
> the superclass responsibility correctly for the client context.
>
> <example of Predator and Prey>

My take on this: The most domain-faithful modeling would be:
The Predator's attack is an event, and Prey has a handle for this event
("OnAttack", or as you called it in the last variant "attacked"). The
most common event handling strategy is flee() (which means to run away
in case of Prey with legs and to fly away in case of Prey that can fly,
etc.) - BTW what is a Quail?

> The hard part of LSP is not making the subclasses conform to the
> semantics of the superclass responsibility.

I would not have thought of this as an LSP problem, but yes you can see
it as one: If a message called run() or flee() were implemented in
Elephant by a method that does not move the Elephant, then a client
context - eg. a Predator :) - might have its expectations violated and
not function properly.

> So if the only
> [Prey] that are relevant are [Impala] and [Zebra], then the first
> solution is adequate.

Yes, adequate for the limited domain where all Prey is legged and
fleeing.

---
> >>My second problem here is with the implication that somehow there is a
> >>dependence between client messages that affects substitutability.
> >
> > Huh? The existence of such dependencies is a fact: Since messages
> > received in the past can influence the state an object is in now, they
> > can affect the object's externally visible behavior now, and thus can
> > affect its substitutability.
>
> Not in a well-formed OO application. By definition object behavior
> responsibilities are intrinsic and self-contained so they cannot depend
> on what other objects do (other than through state variables).

Here seems to be a misunderstanding: I talked about just one object and
its state.

> This is most obvious when one uses object state machines. ...
>
> The values of the input state variables may change from one invocation
> to another and that will affect the results of executing the behavior.
> But those values cannot affect the semantics of the responsibility and,
> consequently, the substitutability.

Here we are again. What is your "semantics of the responsibility"? Is it
my "modeled/specified behavior upon receipt of a message"?

Of course the model/specification of the behavior is not changed by the
values. But what is relevant to LSP: The values can change what the
object actually does, and thus - in as far as this is visible to the
client - may change what the client sees and thus may have a dependency
on.
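
For instance (a minimal Java sketch; the Connection name and methods
are made up):

class Connection {
    private boolean closed = false;   // state set by earlier messages

    public void close() { closed = true; }

    public void send(String msg) {
        // What this object actually does now depends on messages it
        // received in the past; a client that assumes send() always
        // transmits will be surprised after close() has been called.
        if (closed) {
            throw new IllegalStateException("connection already closed");
        }
        // ... transmit msg ...
    }
}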

> They can only affect
> substitutability indirectly by determining how relationships are
> instantiated.

?

> But that is prior to invocation of the
From: Frans Bouma on
H. S. Lahman wrote:

> Responding to Bouma...
>
> > > Here we disagree somewhat. What you are describing here is
> > > implementation inheritance (i.e., the subclass overrides a defined
> > > property implementation that already exists in a superclass).
> > > This was one of those ideas in early OO development that seemed
> > > like a good idea at the time but in practice led to more problems
> > > than it cured. However, in modern practice one tries to avoid
> > > implementation inheritance for a variety of reasons, of which
> > > fragile maintainability is even more important than the potential
> > > for LSP problems. So in a well-formed OO application
> > > responsibilities are only implemented at the leaf subclass level,
> > > which creates an environment of pure substitution between
> > > subclass implementations of a superclass responsibility.
> >
> >
> > Isn't that somewhat like bending reality to meet a certain
> > definition D of said reality? Because I find the requirement to
> > implement type definitions only at the leaf level too restrictive,
> > and to me it seems this requirement is invented not to be practical
> > but to be able to meet a given principle's demands.
>
> The primary goal of the OO paradigm is to provide software that is
> maintainable in the face of volatile requirements over time. The
> debates over LSP themselves demonstrate that polymorphic dispatch in
> generalization structures is fragile, especially during maintenance.
> Practical experience has indicated that implementation inheritance is
> especially prone to foot-shooting during maintenance.
>
> That's because overrides add an additional dimension to resolving
> object properties. Without overrides the client can safely access
> the generalization structure at the highest level where a
> responsibility appears -- always. To provide a client collaboration
> the developer only needs to understand what the superclass'
> responsibility semantics are.
>
> In contrast, with overrides the developer must understand the entire
> tree and _where the overrides are located_. For example:
>
>                  [Service]
>                  + doIt()
>                      A
>                      |
>             +--------+--------+
>             |                 |
>        [ServiceA1]       [ServiceB1]
>       <+ overrideA>
>             A
>             |
>     +-------+-------+
>     |               |
> [ServiceA11]   [ServiceA12]
>
> The fact that the override exists for [ServiceA1] indicates that the
> behavior Service.doIt is unacceptable for that limb of the tree. One
> already has an LSP problem because that notion of 'unacceptable'
> already says that Service.doIt and ServiceA1.doIt are not
> substitutable. That lack of substitutability is what forces the
> developer to understand the whole tree when deciding where a
> collaboration should access the tree.

I see your point when you look at the material from the POV you're
describing; however, what about this:
at the point of ServiceA1, the TYPE ServiceA1 requires a specialization
of doIt(); the original doesn't fit in anymore. IMHO that's tied to the
TYPE ServiceA1. If you're using a ServiceA1 type, you thus know what
that type implies and does; how else would you decide which type to
use? So if ServiceA1 documents that it overrides doIt(), you therefore
know it has its own implementation of doIt. If you NEED the
implementation of the superclass, don't use ServiceA1.

> Let's assume that only ServiceA1.doIt is acceptable to a particular
> client, the developer implements the collaboration only with members
> of the [ServiceA1] set.
>
> Now let's assume that some maintenance is done:
>
>                  [Service]
>                  + doIt()
>                      A
>                      |
>             +--------+--------+
>             |                 |
>        [ServiceA1]       [ServiceB1]
>             A
>             |
>       +-----+--------------+
>       |                    |
> [Intermediate]        [ServiceA13]
> <+ overrideA>
>       A
>       |
>   +---+-----------+
>   |               |
> [ServiceA11] [ServiceA12]
>
> This has immediately broken the original client accessing [ServiceA1]
> because it can now access ServiceA13.doIt, which will have the
> unacceptable Service.doIt behavior. IOW, when the maintenance was
> done the access of the collaboration should have been modified to
> access only members of the [Intermediate] set.

Yes sure it has broken it, but didn't ServiceA1 change? Didn't the
behavior of ServiceA1.doIt() change? It did. Because it did, the TYPE
ServiceA1 changed, because as I described above, ServiceA1.doIt()
apparently needed a different implementation because the TYPE suggested
it. Changing that, by factoring it out to another class, changes the
TYPE, and therefore you can't use the same lib with the same client;
you need to bounce the version number.

I understand your example, though I don't see why it's thus a bad
feature by definition. Calling a system function isn't a bad feature
either, just because I can call format c:

> That means that whenever one changes the level at which overrides are
> done one must go and check every client context to see whether they
> have been broken. That is a major no-no for efficient maintenance
> and it opens up a lot of opportunities for defects.

That's not necessary: because the Type has changed, they're broken by
definition. You then indeed have to test again whether they work with
the new lib. But that's required anyway, because unit tests will show
that your new lib, with the new classes, fails your tests because the
behavior of a method in a shipped interface has changed.

> Problems with this sort of thing can be quite subtle and
> intermittent, which makes diagnosis tricky and increases the cost of
> repair. For that reason most shops have decided to sacrifice the
> convenience of implementation inheritance to avoid downstream
> maintenance headaches and field escapes.

Well, it depends on your language, of course. In Java, for example,
where every method is virtual unless stated otherwise, you will run
into this more often than in other languages where virtual methods are
a decision made by the library writer.
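
For example (a minimal Java sketch; the class and method names are made
up):

class LibraryClass {
    // Overridable by default in Java: any subclass may replace this.
    public void process() { /* ... */ }

    // Explicitly non-overridable: the library writer opted out, and the
    // compiler rejects overrides.
    public final void invariantStep() { /* ... */ }
}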

Your restriction also limits the # of l
From: H. S. Lahman on
Responding to Ulf...

>>>...
>>>I intended not to describe implementation inheritance but specification
>>>inheritance.
>>>
>>>The problem is not that inherited behavior implementations are overridden.
>>>The problem is: What the behavior implementation finally is in the
>>>concrete instance is not compatible to the assumptions under which
>>>interactions involving superclass instances were modeled. For the
>>>problem to arise it does not matter whether these assumptions were
>>>derived from actual behavior implementations in the superclass or from
>>>specifications of behavior in the superclass or intuitively from the
>>>superclass's name and informally described purpose.
>>
>>What I was responding to was the notion of overriding the superclass in
>>the subclass. If one regards this as specification inheritance rather
>>than implementation inheritance, then such an override is impossible in
>>a well-formed OO application. That is because the semantics of the
>>responsibility is defined in one and only one place: the highest level
>>in the generalization tree where it is identified. All specializations
>>in a direct line of descent /must/ honor that semantic definition or the
>>application is mal-formed.
>
>
> I feel sth is going wrong in this discussion.

sth means???

>
> 1. Could it be that in order to understand and evaluate your argument
> it is indispensable for me to exactly know what you mean by a
> "responsibility"? It must be sth linguistic for it has a meaning
> ("semantics of a responsibility") and it must have an existence at
> runtime ("clients may invoke them one a peer-to-peer basis") ...

Responsibilities are basic OO properties. All object properties are
defined in terms of either a responsibility for knowledge or a
responsibility for behavior. IOW, objects are defined in terms of What
they should know and/or What they should be able to do.
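
For example (a minimal Java sketch; the Dog example is invented):

class Dog {
    private double weightKg;      // responsibility for knowledge: what it knows

    public double getWeightKg() { // exposing that knowledge to clients
        return weightKg;
    }

    public void fetch() {         // responsibility for behavior: what it can do
        /* ... */
    }
}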

These are really very basic definitions that are covered in any text on
OOA/D. This and another question about relationships below inspires me
to ask: what is your level of OO exposure?

>
> 2. To me it seems that you are effectively saying: If one models an
> object's behavior upon receipt of a message m in a class X then one
> cannot model it again in subclass Y. I have not seen a constraint to
> this effect in the UML specification, and I have never heard of a
> design guideline to that effect. It must be possible to refine the
> modeling of a behavior from a more general, unspecific model in the
> superclass context to a more specific, refined model in the context of
> the subclass. Of course the magic word here is *refine*. There has been
> some work on the topic of when the model in the subclass is a
> refinement of the model in the superclass for different modelling
> techniques, including state machines (cf. UML) and axiomatic
> specifications (cf. DbC). I am not aware of corresponding work on
> collaboration models (- maybe work on trace refinement and process
> algebra could be a starting point here?)

That is not quite what I am saying. I am saying any given object in a
generalization has a suite of responsibilities that are defined by the
object's membership in /all/ of the various classes in the
generalization in a direct line of descent from root superclass to the
leaf subclass to which the object belongs.

However, the semantics of WHAT the responsibilities are cannot be
multiply defined. That is, the object cannot be a member of two sets
where the property is defined differently (i.e., in a conflicting
manner). For that reason shared responsibilities are defined in and
only in the highest level superclass where all members have the
responsibility. That is just basic Class Model normalization.

What I think you are talking about here (specific refinement from a
generalization) is different implementations of the same responsibility
semantics. Thus I can implement a responsibility to sort a list of
values in ascending order with a bubble sort, an insertion sort, a
Quicksort, or any other sort algorithm in different subclasses.
Similarly, I can implement a responsibility to compute withholding tax
differently for a SalariedEmployee subclass than for a sibling
HourlyEmployee subclass. But in both cases the semantics of the
responsibility is exactly the same from the perspective of an external
client for each implementation pair.
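
A minimal Java sketch of that withholding example (the formulas are
invented; only the point about shared semantics matters):

abstract class Employee {
    // The responsibility's semantics are defined once, here: return the
    // withholding for the current pay period.
    public abstract double withholdingTax();
}

class SalariedEmployee extends Employee {
    private final double annualSalary;
    SalariedEmployee(double annualSalary) { this.annualSalary = annualSalary; }

    @Override
    public double withholdingTax() {
        return (annualSalary / 26.0) * 0.20;     // illustrative biweekly rate
    }
}

class HourlyEmployee extends Employee {
    private final double hourlyRate;
    private final double hoursWorked;
    HourlyEmployee(double hourlyRate, double hoursWorked) {
        this.hourlyRate = hourlyRate;
        this.hoursWorked = hoursWorked;
    }

    @Override
    public double withholdingTax() {
        return hourlyRate * hoursWorked * 0.15;  // illustrative flat rate
    }
}

// To an external client both implementations satisfy exactly the same
// responsibility semantics; only the computation differs.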

Thus in an OO context LSP comes down to ensuring that the superclass
responsibility semantics is defined with sufficient detail to be useful
to clients while also ensuring sufficient generality so that it is
consistent with all the subclass implementations. Trying to do that in
general is virtually impossible, but it becomes possible within the
limited context of a particular problem solution.

>
> 3. Unfortunately you just clarified what you meant in your previous
> reply to my posting but did not respond to my clarification of what I
> meant in that posting: The root cause for the problem to be solved by
> LSP is not overriding but clients' assumptions about the superclass.

We are going in circles. I was only responding to your statement about
overriding. In your clarification you said that the overrides in
question were specification overrides, not implementation overrides as I
assumed. And my response to that was that there are no specification
overrides in an OO generalization.

I have no disagreement with your last sentence immediately above. In
fact, it is pretty much the point I was trying to make about LSP with
Kazakov.

> ---
>
>>Note that such compliance does not depend upon client interactions. The
>>tricky part where LSP comes into to play is defining the semantics of
>>the superclass responsibility correctly for the client context.
>>
>><example of Predator and Prey>
>
>
> My take on this: The most domain-faithful modeling would be:
> The Predator's attack is an event, and Prey has a handle for this event
> ("OnAttack", or as you called it in the last variant "attacked"). The
> most common event handling strategy is flee() (which means to run away
> in case of Prey with legs and to fly away in case of Prey that can fly,
> etc.) - BTW what is a Quail?

In fact, that is pretty much exactly what OOA/D does when it separates
message and method. In OOA/D messages are announcements about how the
sender