
RE: typing (was RE: Personal reply)



Discoverable services will depend on a requirements match; it isn't 
just dynamic machinery (that is probably a pipe dream). 
What I read says an enterprise engineer discovers the 
first level, reads the service descriptions, and, by 
citation, discovers whether the schemas or DTDs provide 
what they need.  Then they create scripts that route 
data among the applications (e.g., XLANG).  So this is automation 
of how it is done now, with the enterprise engineer 
stepping into some roles that used to be filled 
by logistics analysts.
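
Here is a rough sketch of that flow in Python.  The 
service-description format, tag names, and URLs are all made up 
for illustration; real registries and description languages will 
differ.

import urllib.request
import xml.etree.ElementTree as ET

REQUIRED_ELEMENTS = {"Incident", "Offense"}  # what our local process needs

def cited_schemas(description_url):
    # Read a service description and collect the schema URLs it cites.
    with urllib.request.urlopen(description_url) as resp:
        doc = ET.parse(resp)
    return [ref.get("location") for ref in doc.iter("schemaRef")]

def schema_covers_requirements(schema_url):
    # Check whether a cited schema declares the elements we need.
    with urllib.request.urlopen(schema_url) as resp:
        schema = ET.parse(resp)
    declared = {e.get("name") for e in
                schema.iter("{http://www.w3.org/2001/XMLSchema}element")}
    return REQUIRED_ELEMENTS <= declared

def service_matches(description_url):
    # Requirements match first; the routing scripts still get
    # written by hand afterward.
    return any(schema_covers_requirements(u)
               for u in cited_schemas(description_url))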

Also, adopting an application often means the local 
requirements are adjusted to conform to the application.  
Customization down to the deeper local levels for every 
rice bowl is too expensive to justify.  It's a big issue with QBE 
systems because the forms have to be integrated 
in ways that often vary from the local process. 
The local process isn't holy.  Changing it is a cost tradeoff 
with political ripples, but it's a part of doing business.

In many ways, this is precisely why DTDs and 
schemas are used.  Until a standard for the 
data and data types exists, it is a dicey proposition 
to create products for that market, particularly 
if the data created must be aggregated into larger 
sets.  The NIBRS standard for law enforcement 
is a perfect example.  It enables us to create a 
product with a common base of information, but 
also a certain amount of customization, given that 
each state plays with that format a little.  What 
actually makes this very hard for the 
last mouth in the food chain (the FBI) is that 
they allowed a company to foist old-style byte-counting 
validation on them, so the validation 
costs are enormous and the data has to be pushed 
up from each agency, not pulled on demand. 
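
To see why byte counting hurts, here is a sketch of that style of 
validation in Python.  The field names and offsets are invented; 
they are not the real NIBRS segment layout.

# (name, start, end) byte offsets in a fixed-width record
FIELDS = [
    ("segment",   0,  2),
    ("ori",       2, 11),
    ("incident", 11, 23),
]

def parse_record(line):
    # Every agency's writer and every reader must agree on these
    # offsets exactly; widen one field upstream and everything
    # after it is silently corrupted.
    if len(line) < FIELDS[-1][2]:
        raise ValueError("record too short: byte counts do not line up")
    return {name: line[start:end].strip() for name, start, end in FIELDS}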

Were they to go to XML and schemas, we could go 
fast.  As it is, this is going to be a bottom-up 
affair, more expensive than it has to be.  
It will happen contract by contract.  It will 
be "simple", and when all the "simple" solutions 
collide, it will be a mess to be cleaned up.
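
With XML and schemas, the same check becomes declarative.  A sketch 
using lxml; the schema and document file names are hypothetical 
stand-ins for a state's NIBRS variant and an agency's report:

from lxml import etree

schema = etree.XMLSchema(etree.parse("state_variant.xsd"))
doc = etree.parse("incident_report.xml")

if schema.validate(doc):
    # Fields are addressed by name, not byte offset, so a state can
    # add elements without breaking downstream readers.
    print(doc.findtext("Offense"))
else:
    # Validation errors name the offending elements instead of
    # leaving you to count bytes.
    print(schema.error_log)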

BTW, for the CSV fans, it won't work when the 
pipeline gets long and the systems are heterogeneous.
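
The problem is that a CSV row carries no types, no nesting, and no 
self-description, so every hop must re-infer its structure.  A tiny 
Python illustration with invented data:

import csv, io

row = '42,"Smith, John",,19990105\r\n'
print(next(csv.reader(io.StringIO(row))))
# -> ['42', 'Smith, John', '', '19990105']
# Is field 1 an int or an ID?  Is field 3 NULL or an empty string?
# Is field 4 a date or text?  The data cannot say, and after a few
# heterogeneous hops every system has answered differently.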

Len Bullard
Intergraph Public Safety
clbullar@ingr.com
http://www.mp3.com/LenBullard

Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h


-----Original Message-----
From: Charles Reitzel [mailto:creitzel@mediaone.net]

1) By separating DTD/schema-supplied pieces out of the Infoset you break all
the instance documents that depended on those items being *included*.  Why
would anyone use those DTD/schema features if they didn't intend this
behavior?
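
To make point 1 concrete: a DTD-supplied default attribute is part 
of the parsed document only if the parser reads the DTD.  A small 
demonstration with lxml (the document is invented):

from lxml import etree

XML = b"""<!DOCTYPE order [
  <!ELEMENT order EMPTY>
  <!ATTLIST order currency CDATA "USD">
]>
<order/>"""

with_dtd = etree.XMLParser(attribute_defaults=True)
print(etree.fromstring(XML, with_dtd).get("currency"))  # USD
print(etree.fromstring(XML).get("currency"))            # None

Strip the DTD handling out and every consumer that relied on the 
defaulted currency being *included* breaks.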

2) One doesn't include arbitrary DTDs or schemas in production pipelines.
You have to evaluate each such document individually for compatibility with
local processing requirements.  Anything else invites disaster.  Thus, for
the time being, the pipe dream of dynamic integration of discoverable
services is just that.
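
In practice that evaluation step reduces to an explicit catalog: a 
document is admitted only if it cites a schema someone has already 
reviewed against local requirements.  A sketch, with an invented 
catalog:

# Schemas individually evaluated for local compatibility.
VETTED_SCHEMAS = {
    "https://example.org/schemas/incident-v2.xsd",
}

def admit(schema_location):
    # Anything not explicitly reviewed is rejected; there is no
    # dynamic discovery here, by design.
    return schema_location in VETTED_SCHEMAS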