Thanks Rick, you have helped me realise that we currently do something
similar to your example below: when we extend a 'standard' type we do so by
identifying all of our 'non-standard' aspects within our own namespace and
leaving those we use from the standard as they are. As you point out, this
makes it clear which bits are standard and which are not.
However, we also add whole new types to the standard schema. In our case
(insurance), this might mean that there is a standard product which allows
for 'standard covers' (say household, which allows standard buildings and
contents covers), but we want to add a whole new cover (say travel). Now the
standards body may well have a standard definition for travel cover and, if
so, we would want to use it. But the whole transaction schema is now
non-standard (because of the additional cover), so at present we declare the
whole schema to be within our own namespace; that is, rather than
xmlns="www.insurancestandards.com/2005/10/householdquote" we might use
xmlns="www.myorg/2006/02/householdquote". What we then find quite difficult
is determining which bits of our schema are different from the standard,
particularly when the standard changes and we want to consider the impact.
So maybe we should try and keep all of the types which are part of the
standard more easily identifiable, maybe something like :-
<my:householdquote xmlns:my="www.myorg/2006/02/householdquote"
                   xmlns="www.insurancestandards.com/2005/10/householdquote">
  <!-- standard stuff -->
  <contentsCoverStuff>
    ....
  </contentsCoverStuff>
  <!-- buildings - some std some not -->
  <buildingsContentsStuff>
    <!-- std -->
    <sumInsuredAmount>1000000</sumInsuredAmount>
    <!-- non std -->
    <my:subsidenceCheckIndicator>false</my:subsidenceCheckIndicator>
  </buildingsContentsStuff>
  <!-- non std for this transaction - but definition available from stds body -->
  <travelCover>
    ....
  </travelCover>
  <!-- non std for this transaction - no definition available -->
  <my:superCover>
    ...
  </my:superCover>
</my:householdquote>
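One practical upside of this layout is that "which bits differ from the
standard" becomes a purely mechanical question: anything outside the
standard namespace is ours. As a rough sketch (using Python's standard
xml.etree module, and a trimmed copy of the instance above; the namespace
URIs are just the placeholders from my example, not real ones), an audit
could look like this:

```python
import xml.etree.ElementTree as ET

STANDARD_NS = "www.insurancestandards.com/2005/10/householdquote"

# Trimmed copy of the instance sketched above (placeholder URIs).
DOC = """\
<my:householdquote xmlns:my="www.myorg/2006/02/householdquote"
                   xmlns="www.insurancestandards.com/2005/10/householdquote">
  <buildingsContentsStuff>
    <sumInsuredAmount>1000000</sumInsuredAmount>
    <my:subsidenceCheckIndicator>false</my:subsidenceCheckIndicator>
  </buildingsContentsStuff>
</my:householdquote>
"""

def audit(xml_text, standard_ns):
    """Split element local names into (standard, non_standard) lists."""
    standard, non_standard = [], []
    for elem in ET.fromstring(xml_text).iter():
        # ElementTree stores qualified names as "{namespace}localname".
        local = elem.tag.split("}", 1)[1]
        if elem.tag.startswith("{" + standard_ns + "}"):
            standard.append(local)
        else:
            non_standard.append(local)
    return standard, non_standard

std, non_std = audit(DOC, STANDARD_NS)
print("standard:", std)        # → ['buildingsContentsStuff', 'sumInsuredAmount']
print("non-standard:", non_std)  # → ['householdquote', 'subsidenceCheckIndicator']
```

That kind of report is exactly what we can't produce today, because with a
single wholesale namespace every element looks equally non-standard.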
What do you think ?
Fraser.
>From: Rick Jelliffe <rjelliffe@allette.com.au>
>To: Fraser Goffin <goffinf@hotmail.com>
>CC: xml-dev@lists.xml.org
>Subject: Re: [xml-dev] Validation - Is it worth it ?
>Date: Mon, 13 Feb 2006 15:44:42 +1100
>
>Fraser Goffin wrote:
>
>>>Not many systems meet those criteria.
>>
>>
>>Agreed (as stated).
>>
>>But the conundrum (if there is one) can also be viewed as :-
>>
>>a. I want to exercise [some] control over messages that arrive at the
>>service interface so that I have a reasonable degree of confidence that
>>the message is 'business processable' and that (as you say) there is
>>confidence that exploitation of the service will be caught by the business
>>rules that provide the actual implementation.
>>
>>b. I want consumers to call my service (I want to do business)
>>
>>c. I want to minimise message rejection where (a) is true.
>>
>>d. I want calling my service to be EASY for consumers (re: Tim Ewald's
>>notion of 'make it easy for people to pay you ' :-).
>>
>>e. That the service is not just a slave to a technical contract which is
>>actually expressed more strictly than the actual business process.
>>
>>Sometimes I think about this as the difference between 'compatible'
>>messages versus those which strictly adhere to a technical specification.
>>Perhaps its really to do with how closely that specification mirrors the
>>business requirements for the service that 'we' want to implement (given,
>>as I said in my original post, that the contract is owned by an external
>>body) ?
>
>I think the issue of whether an accepting system should be liberal in what
>it accepts needs to be counterbalanced against the dangers of encouraging
>public exchange of documents that advertise that they belong to a standard
>document type, yet don't follow the minimum rules for that type. Standards
>get their benefit from reach; incomplete subsets that are OK with you but
>which may omit information needed by others ruin this reach; too many
>local 'optimisations' cause global non-optimality.
>
>It is just a question of expectations: the documentation for your interface
>just needs to say "we accept any documents that accord to the standard
>schema, however we only use information according to this lesser schema"
>(where the lesser schema is the standard schema with everything you don't
>require made optional).
>
>Another choice, when you have private exchange but want to use standard
>schemas as much as possible but you don't want extraneous fluff, is to have
>your own top-level elements, which then include whichever bits of the
>standard schema as needed. That is good markup practice: it makes it clear
>what is standard and what is non-standard. You might even have the same
>names but different namespace.
>
>E.g. if the original schema requires
>
><standard:person>
><standard:name>...
><standard:address>...
><standard:shoe-size>...
></standard:person>
>
>and you don't want the required shoe-size, use
>
><my:person>
><standard:name>...
><standard:address>...
></my:person>
>
>[PRODUCT PLACEMENT] You might like to look at Topologi's Interceptor
>product at
>www.topologi.com. This is a proxy servlet that intercepts incoming POX
>data and tests, validates and dispatches the documents. In your case, for
>example, you could have Schematron validation of your business requirements,
>but as a nice clean layer from the processing servlet.
>
>Cheers
>Rick Jelliffe
>
>-----------------------------------------------------------------
>The xml-dev list is sponsored by XML.org <http://www.xml.org>, an
>initiative of OASIS <http://www.oasis-open.org>
>
>The list archives are at http://lists.xml.org/archives/xml-dev/
>
>To subscribe or unsubscribe from this list use the subscription
>manager: <http://www.oasis-open.org/mlmanage/index.php>
>