On 8/9/05, Bullard, Claude L (Len) <len.bullard@intergraph.com> wrote:
<snip/>
>
> >I'd like to believe that if you can find models (markup, DB, OO or
> >otherwise) that have wide applicability (and result in advantage for
> >the computer) you'll find that you have models that have a good chance
> >of being widely accepted by the humans involved. See below...
>
> Wide applicability: that's a good metric. At the very least, it
> takes the audience/listener into account. On the other hand, as
> noted below, when something is widely applicable, is it semantically
> strong, that is, very meaningful?
I think the axes of precision and general understanding are
orthogonal. That doesn't mean it's easy to discover models that
capture high degrees of both; rather, it seems fiendishly difficult.
One big hurdle is the amount of time it takes for complex knowledge
to be generally accepted. As a poor example, at one point Einstein's
relativity was considered nearly impossible for most people to
understand; nowadays we've got string theory covering that
territory...
> (wandering off topic but maybe
> there is a measure of structure (however we define that) that
> can be applied to determine when a markup design is widely applicable).
If there is, it's going to be similar to the metrics used for
analysing code complexity: number of external references, degree of
separation between references, number of distinct terms, degree of
encapsulation, number of layers of inheritance/dependency, etc.
There's probably a PhD thesis or two hiding in that mess somewhere
(though I'm sure it's already been done)...
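
Just to make that concrete, here's a rough sketch in Python of what a
couple of those measures might look like when pointed at a markup
instance instead of code. The metric names and the choice of measures
are mine alone, so treat it as a strawman, not a proposal:

    # Strawman structural metrics for a markup instance (stdlib only).
    # Metric names are invented for illustration, not from any standard.
    import xml.etree.ElementTree as ET
    from collections import Counter

    def markup_metrics(xml_text):
        root = ET.fromstring(xml_text)
        names = Counter()   # "number of distinct terms"
        namespaces = set()  # crude stand-in for "external references"
        max_depth = 0       # crude stand-in for "layers of dependency"

        def walk(elem, depth):
            nonlocal max_depth
            max_depth = max(max_depth, depth)
            tag = elem.tag
            if tag.startswith('{'):  # ElementTree's {uri}local form
                uri, _, local = tag[1:].partition('}')
                namespaces.add(uri)
                tag = local
            names[tag] += 1
            for child in elem:
                walk(child, depth + 1)

        walk(root, 1)
        return {
            'distinct_terms': len(names),
            'total_elements': sum(names.values()),
            'external_refs': len(namespaces),
            'max_depth': max_depth,
        }

    print(markup_metrics('<a xmlns:x="urn:x"><b><x:c/></b><b/></a>'))

Obviously you'd want something much richer (degree of separation,
encapsulation, etc.) before drawing conclusions, but that's the shape
of it.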
<snip/>
> Tell me who gets to name the names so we can
> get on with this" trope is recommended to markup professionals.
> In other words, can we ever really separate the politics of naming
> from the craft?
Yes, that is the $64,000,000 (inflation adjusted) question. Given the
complexity and opaqueness of many of the XML "standards", I think
we're still, for the most part, a long way from having anything like
trusted experts in the field.
> >Almost forgot to answer your question: if a good organic model needs
> >"fixing" then it wasn't that good in the first place; too much assumed
> >knowledge. So, IOW, I'd vote no...
>
> Interesting POV. The problem is, good for whom (see last para)?
Good for everybody (heh, heh)...
> HTML and XML demonstrate something I find fascinating: scalability
> is inversely proportional to semantic load.
I don't think it's scalability; I think it's rate of uptake. That's
common sense: make things easy to understand and many people will be
able to use them. That doesn't necessarily give us scalability; for
that you need good interoperability. If anything, the thousands of
competing XML standards demonstrate that at a
semantic/ontological/common-understanding level we haven't even
scratched the surface of scalability.
> The more it means, the
> less useful it is for the greatest number. That is somewhat the
> Principle of Least Power, so we have to be very careful how we
> apply some principles. Things of general utility tend to be
> few because one doesn't need many, so differentiation becomes
> cosmetic. Thus, branding.
Great, now we'll get the Nike and Reebok business interchange
languages to add to the mix...
--
Peter Hunsberger