On Sat, 04 Jan 2003 12:03:50 -0500, Roger L. Costello <costello@mitre.org>
wrote:
>
> Instead, define precisely one, unambiguous "interchange standard". This
> becomes the "lingua franca" interchange format.
Sigh. The eternal optimism of nerds ;-) that human factors can somehow be
assumed away. See http://www.sys-con.com/xml/articleprint.cfm?id=314 for
Sean McGrath's take on this:
"My contention is that the failed industry-standard schema initiatives of
the past did not fail for technical reasons; they failed for human reasons.
There is a rich lore of experience here that the new wave of XML schema
designers could do worse than mine for valuable insights. Those that don't
learn from the mistakes of the past truly are doomed to repeat them. Apart
from paying due regard to history, I think XML schema design needs to take
a leaf out of the extreme programming book. Start with the customer (human),
do the smallest thing that can possibly work, and start using it. Never
lose sight of the human creating the XML content or the human writing
software to process the content."
>
> 1. You should never interchange data that may be calculated. Interchange
> using the "fundamental data", from which calculations may be
> done.
Seems like a useful heuristic, but not a Thou Shalt Not ... Ya gotta
consider the processing capabilities of the devices producing and consuming
the data, the frequency with which data are interchanged, the bandwidth
available ... I'm becoming a bit of an Extreme Programming advocate in my
dotage -- listen to what the customer wants and do the minimum that can
achieve it before you worry too much about large-scale abstractions and
grand principles.
>
> 2. The more different ways data can be used, the higher the value of the
> data, the less application-specific it is, and the more suitable it is
> for data interchange.
Sure, but again see the XP people's rants on this subject: a lot more
projects have failed because the developers put their effort into designing
generalizable, extensible abstractions up front than have failed because
the code had to be refactored as requirements and technology changed.
>
> 3. Once you have identified good interchange data you then need to
> determine how to represent it. There may be various ways to represent
> it. However, you should pick precisely one, unambiguous representation
> for which there are algorithms to map to the other representations. This
> becomes the "interchange standard". Defining an interchange
> standard greatly reduces the complexity of all
Seems like a good principle, but also sounds like a political rathole
because everyone wants the data to be in a form that they can easily
produce or consume. In a Technocracy where decisions are made on the basis
of the Right Thing, this would be a very workable principle. Someone tell
me when they start to sell tickets on the starship heading for that planet
:-)
>
> ...
>
> Okay. That's a start. I invite you to present your ideas on what
> characterizes good interchange data. I invite your input on how to
> better express what I have presented above (e.g., do you agree with my
> term "high value data" to describe the data that suitable for being
> interchanged? Can you think of a better/more accurate term?)
Sorry to be more than my usual cynical self. I think you've got some good
ideas here. I'd be more interested in working backwards from success
stories to see if these principles DO seem to have been exhibited than in
adopting them as best practice guidelines in the absence of empirical
evidence. Still, I agree with the general principle that the only standard
schemas we're likely to encounter anytime soon will be EXCHANGE formats,
and anything that can be done to rationalize the process of devising
standardized exchange formats is a good thing.