Len,
I find your response to Bruce Cox's questions at odds with the practice we are
implementing in OASIS technology, specifically to fix the train wreck that the
W3C has created with XSD and XML.
o Schemas IMHO do not scale. That's the entire problem. When people attempt to
apply them in multiple arenas, the only recourse they have is to make every
single field and attribute OPTIONAL! Don't believe me? Look at what OAGi had
to do with their BODs. And just in case you think this is a rare instance,
consider what happened when they created a BOD with 10,000 elements in it
(yes, they did this) and no one can figure out how to use it. The FIX
protocol people are going through the same nightmare right now too - same
metrics - thousands of people trying to use a common standard worldwide.
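To make the failure mode concrete, here is a made-up fragment (the element
names are invented for illustration - this is not an actual OAGi BOD or FIX
schema) showing the usual escape hatch: everything collapses to minOccurs="0"
and use="optional", so the schema validates almost anything and the real usage
rules go undocumented:

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="PurchaseOrder">
    <xs:complexType>
      <xs:sequence>
        <!-- every child relaxed so one schema can serve every context -->
        <xs:element name="OrderID"    type="xs:string" minOccurs="0"/>
        <xs:element name="OrderDate"  type="xs:date"   minOccurs="0"/>
        <xs:element name="BuyerParty" type="xs:string" minOccurs="0"/>
        <xs:element name="LineItem"   type="xs:string" minOccurs="0"
                    maxOccurs="unbounded"/>
      </xs:sequence>
      <xs:attribute name="currency" type="xs:string" use="optional"/>
    </xs:complexType>
  </xs:element>
</xs:schema>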
Basically there is nothing new here just because it's XML. We've known about
this problem for 20 years, with the work in EDI preceding all of this.
So what are the answers? First, you have to manage context as a central focus
of your integration - starting from the business process definitions themselves,
through the business partner agreements, then the assembly of content, and
finally the on-the-wire transactions. This is the lesson learned from the
original ebXML work, where context management was weakly covered, and only in
CCTS. Now, with the new ebXML work and particularly BPSS V2 and OASIS CAM, you
can fully manage context, create an ebContext instance, and associate this
with your partner CPAs.
So what does this do for schemas?
It allows you to create CAM templates that implement the level of richness, both
in business rules and information semantics, that schemas are devoid of.
Does this mean you have to toss away schemas? No! It means you augment schemas
with CAM templates, and then you finally have the means to share consistent
integration information across a global community of interest.
Each participant can set their own local context information as needed.
So every time you create a schema, the next step is to create a CAM template
that documents the actual usage patterns and rules. Fortunately this is very
easy to do for anyone who is XML literate. It also means you can have much
more relaxed schema design rules, since the CAM template allows you to
point to and reference common vocabularies - again, something else you cannot
do with schemas out of the box.
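To give a flavor of what such a template looks like, here is a rough sketch -
note that the sample patent fields, the placeholder values, and the constraint
action names below are approximations from memory rather than quotes from the
spec, so check the tutorial linked below for the exact syntax. The idea is that
the template pairs the expected content assembly with context-driven rules:

<as:CAM xmlns:as="http://www.oasis-open.org/committees/cam" version="1.0">
  <as:AssemblyStructure>
    <as:Structure taxonomy="XML" ID="PatentFiling">
      <!-- the expected content assembly, written out as example XML -->
      <PatentFiling>
        <DocumentNumber>%US12345678%</DocumentNumber>
        <FilingDate>%2004-06-30%</FilingDate>
        <Applicant>%string%</Applicant>
      </PatentFiling>
    </as:Structure>
  </as:AssemblyStructure>
  <as:BusinessUseContext>
    <as:Rules>
      <as:default>
        <as:context>
          <!-- usage rules the schema alone does not carry -->
          <as:constraint action="makeMandatory(//PatentFiling/DocumentNumber)"/>
          <as:constraint
            action="setStringMask(//PatentFiling/DocumentNumber,'AANNNNNNNN')"/>
          <as:constraint
            action="setDateMask(//PatentFiling/FilingDate,'YYYY-MM-DD')"/>
        </as:context>
      </as:default>
    </as:Rules>
  </as:BusinessUseContext>
</as:CAM>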
You can find a tutorial and tools for using CAM at the OASIS CAM website:
http://www.oasis-open.org/committees/cam
Thanks, DW
p.s. Note to Bruce - yes, CAM does allow you to specify patterns at the field
level, so you can formalize your patent numbering schemes and have those
validated.
<<<<<<<< Len Bullard wrote >>>>>>>>>>>>>>>>
It might be fair to say without regard to technology:
o Schemas are made as context independent as possible
to enable them to be applied in more contexts (they scale)
o Business rules are contexts (semantics don't scale
without force being applied. Use of force is itself
a context-dependent operation with rules).
Value-focused Thinking methods might be applicable
here. One needs to identify fundamental objectives
that are independent. Dependencies among these
indicate hidden objectives (undiscovered) or that
a means objective (think, sub-goal) has been misapplied
as a fundamental objective.
len
From: Cox, Bruce [mailto:Bruce.Cox@USPTO.GOV]
Are business rules semantics? I take that question to mean that some
business rules can be fully automated since they are about properties of
data that succumb to, for example, XML Schema data typing, while others
are more problematic and may require methods not easily automated. In
the case of patent document numbers, the goal would be to "ensure shared
data is recognized" and only then processed for the current purpose.
I certainly appreciate the benefit of using DTDs with their lack of
content validation. Without that characteristic, it is unlikely that
the patent offices of the world would have agreed on a common vocabulary
for patent applications and publications. Now that we are on the verge
of exchanging instances internationally, that characteristic may bite us
by impairing interoperability due to significant variances in the content
between the start and end tags for any given element.
Document numbers are a special case, in that they are critical to
establishing the relationship among patents filed and granted in
different countries. Accuracy is sufficiently important to be spending
millions of USD a year to correct bad numbers provided by applicants or
other offices. In this one case, I hope there is some way to express
the validation rules independently of custom code so that we can
describe the rules to each other unambiguously and implement them
consistently.
With XML Schema data typing, followed by Schematron, what would come
next to cover the residue? I don't think there is anything to do with document
numbers that can't be automatically validated.
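For instance (the element names and the eight-digit rule below are invented
purely for illustration, not any office's actual numbering scheme), a
Schematron pattern layered over the schema's data typing could carry the kind
of cross-field check I mean:

<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:pattern>
    <sch:rule context="document-id">
      <!-- hypothetical checks an XSD facet alone cannot express -->
      <sch:assert test="country = 'US' or country = 'EP' or country = 'JP'">
        The country code must identify a recognized office.
      </sch:assert>
      <sch:assert test="not(country = 'US') or
                        (string-length(doc-number) = 8 and
                         translate(doc-number, '0123456789', '') = '')">
        Under this hypothetical rule, US document numbers are eight digits.
      </sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>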
----- End forwarded message -----