Actually, Len's comments remind me of a discussion I had with a member
of my team about how to design a non-Web-services XML message
processor. There are about 10 items of interest to the program in the
message (which can be quite large). What I wanted him to do was write
the tool somehow (I really didn't care how... SAX, XPath, XQuery) so
that it would process only those bits and then get out of the way
(there is a transformation step too, but speed wasn't really critical,
so XSLT would've been a reasonable approach).
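The sort of thing I had in mind might look something like this — a
minimal sketch using Python's stdlib streaming parser rather than a
Java SAX handler, with invented element names ("OrderId", "Amount")
standing in for the real items of interest:

```python
# Sketch: stream a large XML message and pull out only the handful of
# elements the application cares about, ignoring everything else.
# The element names here are hypothetical placeholders.
import io
import xml.etree.ElementTree as ET

WANTED = {"OrderId", "Amount"}

def extract_fields(stream, wanted=WANTED):
    """Return {tag: text} for the first occurrence of each wanted element."""
    found = {}
    for event, elem in ET.iterparse(stream, events=("end",)):
        tag = elem.tag.split("}")[-1]  # strip any namespace qualifier
        if tag in wanted and tag not in found:
            found[tag] = (elem.text or "").strip()
        elem.clear()  # discard the element; we never build the full tree
        if len(found) == len(wanted):
            break  # stop reading as soon as we have everything
    return found

msg = b"<Order><OrderId>42</OrderId><Noise>...</Noise><Amount>19.99</Amount></Order>"
print(extract_fields(io.BytesIO(msg)))
```

The point being that the tool's knowledge of the message is confined to
a small set of names; anything else in the vocabulary can change
without touching the code, which is exactly what a compiled binding
gives up.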
What did he do?
Yep. XMLBeans. "Works great!" was the response to my somewhat
horrified, "you did what??"
I didn't want the implementation of the tool to give a toss about the
structure of the message, but now if anything changes, we've got to
recompile/redeploy.
Of course it was quicker/easier to develop, but I'm not really sure
about the TCO.
This pretty much proves Len's point, I think. No matter how many
options for flexibility exist in isolating the parts of the vocabulary
that really matter for a given application (assuming that set is
significantly less than 50% of the document), there will always be
people with fancy tools and magic buttons to render all that flexibility
moot. From Len's description: pragmatism in action. Of course, that
means dealing with the inevitable change is someone else's problem...
I won't even mention the time our canonical schemas were modified
in-house by one of our partners to add "missing" field-length
validation, and the ensuing dismay when we tried to give them new ones
that were structurally equivalent (without starting this argument ;),
but in which the namespace had been altered slightly and which were, of
course, again "missing" the field sizes. Ok, I mentioned it.
I think that the vast majority of the members of this list wouldn't have
done that, but these sorts of problems always seem to crop up no matter
how hard you try to prevent them. Aside from forced subscriptions to
the list and a whack upside the head with a couple of Rusty's books ;),
I'm not sure what the best solution is. I think we're a bit
outnumbered...
ast
On Tue, 2006-02-28 at 14:26, Bullard, Claude L (Len) wrote:
> I don't think it weird but I'm not surprised by that. It is pretty
> simple. We use markup over delimited ASCII
> because we want to put more semantic hints as to the producer's
> intent. If we want stronger hints, we go
> to a language like RDF to provide stronger linking among the signs.
> If we want to send our intent and ensure
> it can't be misinterpreted, we package up the
> intentions/functions/methods with the data and send that.
>
> So once again, if there is to be a pragmatic layer, and I assume that
> means something codified in the
> program or code that flips the bits on the machines, then other than
> sharing a philosophy of meaningful
> utterances, norms and affordances, how would one communicate those
> utterances, norms, and affordances?
> IOW, what is above semantics? Pragmatics. How do we implement
> pragmatics? Objects.
>
> Other means may be possible but that is a first position. Even an
> interpreter for a set of RDF assertions
> attempting to evaluate a text requires a functional contextualizer.
>
> len