Re: Are we losing out because of grammars? (Re: Schema ambiguity detection algorithm for RELAX (1/4))
- From: Rick Jelliffe <ricko@allette.com.au>
- To: xml-dev@lists.xml.org
- Date: Mon, 29 Jan 2001 21:11:01 +0800
From: Bullard, Claude L (Len) <clbullar@ingr.com>
>So essentially, if one has a reasonably large system
>that must either push or pull data from agency to
>agency, tool to tool, one should expect now and
>in the future to write and standardize multiple descriptions of
>that data to enable validation of both co-constraints
>and grammar?
I think that will always be the case, but for reasons that don't spring from
the schema language.
I think there are two kinds of schemas and therefore schema languages: one
tries to express what is true of all data of that type at all times (e.g.
for storage and 80/20 requirements) and another tries to express the things
that make that particular information at that particular time and context
different from other data of the same type. One tries to abstract out the
invariants; the other tries to find abstractions that express the
variations.
The first kind is a map, the second kind is a route. The first kind is good
for automatically generating interfaces and for coarse validation; the
second kind is what is required for data entry and for debugging data at
all. (As for the status quo, I don't believe XML Schemas and DTDs pay much
or any attention to this second kind of schema: maybe TREX and RELAX do a
little bit, and I hope Schematron is closer to the other end of the
spectrum.)
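To make the distinction concrete: a grammar-based schema (DTD, XML Schema, RELAX, TREX) can say that an element must contain a price and carry a currency attribute, but not that the value of one depends on the other. The sketch below checks such a co-constraint in plain Python over the standard-library ElementTree API; the element and attribute names and the rule itself are hypothetical, chosen only to illustrate the kind of contextual assertion a rule-based language like Schematron is aimed at.

```python
# Hypothetical co-constraint that a pure grammar cannot express:
# "if an item's currency is 'USD', its price must be below 10000".
import xml.etree.ElementTree as ET

def check_price_constraint(xml_text):
    """Return a list of error messages; an empty list means the rule holds."""
    root = ET.fromstring(xml_text)
    errors = []
    for item in root.iter("item"):          # walk every <item> in context
        currency = item.get("currency")
        price = float(item.findtext("price", default="0"))
        if currency == "USD" and price >= 10000:
            errors.append(f"USD price {price} exceeds the 10000 limit")
    return errors

doc = """<order>
  <item currency="USD"><price>12000</price></item>
  <item currency="EUR"><price>12000</price></item>
</order>"""
print(check_price_constraint(doc))
```

Both items are grammatically identical, so a grammar validates both; only the contextual rule distinguishes them, flagging the first item and passing the second.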
Cheers
Rick Jelliffe