
RE: Picking the Tools -- Marrying processing models to data models


> Reusability is one of the biggest lies of OO.  I personally think that
> code reusability is the philosopher's stone: it would be a source of
> never-ending value, but there is the small problem that it doesn't exist.
> There is nothing special about OO that makes code reuse magically easier.
> The strange theory that inheritance is the key to object re-use is one of
> the reasons I've heard for the insistence of some component models on
> making inheritance part of large-granularity interfaces.  This doesn't
> lend any more magic reusability to components than it does to class
> libraries.
> The important thing is to focus on reusing data rather than code, and this
> is a separate issue from whether the code is procedural, functional, OO,
> etc.


> Interface inheritance or code inheritance, Uche?

Differing degrees of either.

> Interface inheritance makes one constrain the communication
> model.

Such constraints can be imposed using other modeling tools, as I've
mentioned.  Besides, such constraints are a matter for typing in my
opinion, not the general wooliness of "interface inheritance".

For instance, I should be able to specialize functionality using
delegation rather than inheritance, and I can do so as long as I have a
type system that allows this (i.e. not OO).

In short: I can have strong constraints without interface inheritance.
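A quick Python sketch of what I mean by specializing through delegation
under a type system that constrains structurally rather than through an
inheritance hierarchy (the names here are mine, purely for illustration):

```python
from typing import Protocol


class Renderer(Protocol):
    """A structural type: anything with a matching render() satisfies
    it.  No inheritance from a common base class is required."""
    def render(self, text: str) -> str: ...


class PlainRenderer:
    def render(self, text: str) -> str:
        return text


class ShoutingRenderer:
    """Specializes PlainRenderer by delegating to it, not by
    subclassing it."""
    def __init__(self, inner: Renderer) -> None:
        self._inner = inner

    def render(self, text: str) -> str:
        return self._inner.render(text).upper()


def publish(r: Renderer, text: str) -> str:
    # The constraint (r must have render()) is imposed by the type,
    # not by any inheritance relationship.
    return r.render(text)
```

Both renderers satisfy the Renderer constraint, and the specialized one
shares no ancestry with the one it wraps: strong constraints, no
interface inheritance.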

> That fits closer to the XML view of a world of
> known message types or document types (really, someone, what IS
> the difference?) or types that must carry their
> definition packaged with them (SGML).

> Code inheritance
> makes for an application that is initially very efficient
> but over time grows brittle and side-effect ridden.  Reuse
> in the context of code inheritance is a limiting concept.

Code inheritance is no more uniformly bad than interface inheritance is
uniformly good.  (I know you didn't really make either statement, but I've
heard such comments from others).

If implementation inheritance is cleanly separated from interface
inheritance, as they are in properly-constructed mix-in classes, the
brittleness largely goes away.  The developer finds themselves almost
forced to constrain the generic code to small and consistent patterns of
behavior.  Most of the brittleness of botched inheritance comes from cases
where programmers confuse the interface they need to sub-class with
the implementation they'd like to reuse.
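To make the mix-in point concrete, here is a small Python sketch (my
own example, not anything from Perry's or Len's messages): the mix-in
carries pure implementation reuse, constrained to one small, consistent
contract with its host class.

```python
class ComparableMixin:
    """Pure implementation reuse: derives the other rich comparisons
    from the single __lt__ the host class must provide.  The mix-in
    adds no interface of its own beyond that small contract."""
    def __le__(self, other):
        return self < other or not (other < self)

    def __gt__(self, other):
        return other < self

    def __ge__(self, other):
        return not (self < other)


class Version(ComparableMixin):
    """Host class: supplies only __lt__; the mix-in does the rest."""
    def __init__(self, major: int, minor: int) -> None:
        self.major, self.minor = major, minor

    def __lt__(self, other: "Version") -> bool:
        return (self.major, self.minor) < (other.major, other.minor)
```

Nothing here tempts the developer to confuse the interface being
sub-classed with the implementation being reused: the mix-in's whole
bargain is "give me __lt__ and I give you the rest".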

Side trip:

When genericizing code, there is *no* substitute for *expert* programming.
Java tries to protect developers from themselves by allowing only single
inheritance of implementation, but it gets this wrong on two fronts: it's
not really solid-enough protection, because developers can easily botch
single inheritance, and it makes it harder for programmers who know what
they're doing to properly genericize their code.

> Yet even for the data, this same mirage of reuse occurs.
> What can be #FIXED and what should be #IMPLIED or #REQUIRED,
> even the bugaboo of element versus attribute: reuse in data types
> is also problematic.  Thus all the weirdness of
> architectures and element Elements.  MMTT....

I can buy that even data reuse might be a mirage: I don't know enough to
refute the claim, and I wonder whether anyone does.  It seems that all the
effort has been focused on code reuse just to find that it's an illusion,
and perhaps if we'd aimed all that effort at data modeling all the while,
it might have borne more fruit.

> Perry isn't wrong, maybe a little elaborate.  In effect,
> a network of encapsulated nodes only says "all politics,
> (policies) are local".   Yet somewhere, predictability
> and verifiability must become a consideration for reliability.
> It must not only do something time after time, it must
> do the same thing time after time or tell me exactly when
> and what is to be different.  Otherwise, we shake instead of gavotte.

I didn't follow his argument in his message.  I don't see how the sensible
idea that all policies are local (for a reasonable and flexible definition
of "local") means that code itself must be joined at the hip to data,
and must indeed be used to shape the data.

> HLAL (High Level Authoring Languages) are designed to make
> the author view of data the most important one.  It isn't
> that the author is KingOfTheWorld, he is just one of
> a series of agents that populate an n-dimensional dataspace
> of time, name, content, and policy (a process control).
> We don't design these with OOP objects in mind necessarily
> because we may not want to demand a particular rendering,
> or process.  We simply want to capture information in
> the metaphor most natural to the person.role doing the
> job, then XSLT into whatever the objects need.
> XSLT is key.  The existence of the transform asserts
> the relationship.

I can buy this as well, but note that it's very different from OO.
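The "transform asserts the relationship" idea can be sketched without
any OO baggage.  Here is a minimal Python stand-in for the XSLT step
(the document shape and function names are my own assumptions, for
illustration only): the author-view document commits to no rendering or
processing model, and a separate transform maps it onto whatever shape
a downstream consumer needs.

```python
import xml.etree.ElementTree as ET

# Author-view document: captures the writer's metaphor, with no
# commitment to any particular rendering or process.
doc = ET.fromstring(
    "<memo><to>Len</to><body>Data first, code second.</body></memo>"
)


def memo_to_message(memo: ET.Element) -> dict:
    """Stands in for the XSLT step: maps the author's document type
    onto the shape some downstream processing objects need.  The
    existence of this transform is what asserts the relationship
    between the two models; neither model needs to know the other."""
    return {
        "recipient": memo.findtext("to"),
        "payload": memo.findtext("body"),
    }
```

The data model and the processing model stay decoupled; only the
transform binds them, and it can be replaced without touching either.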

Uche Ogbuji                               Principal Consultant
uche.ogbuji@fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python