[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
RE: Are we losing out because of grammars? (Re: Schema ambiguity detection algorithm for RELAX (1/4))
- From: "Bullard, Claude L (Len)" <firstname.lastname@example.org>
- To: James Clark <email@example.com>, Rick Jelliffe <firstname.lastname@example.org>
- Date: Sun, 28 Jan 2001 16:48:54 -0600
So essentially, if one has a reasonably large system
that must either push or pull data from agency to
agency, tool to tool, one should expect now and
in the future to write and standardize multiple descriptions of
that data to enable validation of co-constraints.
Consider: these are some very long pipelines from local
agencies to, say, government repositories. Data modelers
are part of this problem, but at the more serious level
of costing delivery and calculating the dependencies
of the processes on the validity of the data, the modeler
may be a trivial part of the cost. It is like the
DPH for XML: a myth by which we simplified, but
not really the model for the cost.
Intergraph Public Safety
Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h
From: James Clark [mailto:email@example.com]
Joe English said pretty much everything I wanted to say. Just one
additional point.
Rick Jelliffe wrote:
> doesn't the presence of these tricky ambiguity issues
> mean that to actually understand RELAX (and presumably certain other
> languages) requires a computer scientist not a data modeler?
If you're using RELAX for validation, it doesn't have any ambiguity
issues. In this regard it is the same as TREX. The ambiguity issues
only arise if you try to use it to "interpret" the document (that is
augment the information in the document by assigning each element or
attribute a label corresponding to some rule in the schema). If you
just stick to validation, there's no issue.
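The distinction can be sketched in a few lines. This is an illustrative toy, not RELAX or TREX syntax: two hypothetical rules (the names "item-as-product" and "item-as-line" are invented for the example) both accept an element named "item". Validation only asks whether *some* rule matches, so the overlap is harmless; interpretation asks *which* rule matched, and here the answer is not unique.

```python
# Toy grammar: two rules that both accept an element named "item".
# Rule names are hypothetical, purely for illustration.
rules = {
    "item-as-product": lambda name: name == "item",
    "item-as-line":    lambda name: name == "item",
}

def validate(name):
    """Validation: succeed if at least one rule matches."""
    return any(match(name) for match in rules.values())

def interpret(name):
    """Interpretation: label the element with the rule that matched.
    Fails if more than one rule matches, i.e. the grammar is ambiguous
    for this input -- even though validation succeeds."""
    labels = [label for label, match in rules.items() if match(name)]
    if len(labels) > 1:
        raise ValueError("ambiguous: " + ", ".join(labels))
    return labels[0] if labels else None
```

`validate("item")` succeeds, while `interpret("item")` raises an ambiguity error: the ambiguity only bites once you ask the schema to assign labels.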
I would agree with the sentiment that it's bad to inflict tricky
ambiguity issues on data modelers. Fortunately this is not inherent in
using grammars for validation.