"Simon St.Laurent" <simonstl@simonstl.com> wrote:
| DTDs emerge from an understanding of what markup does, but both do too
| much (infoset augmentation)
Well, that's if one assumes there is a "the infoset" to be so augmented.
I think John Cowan once clarified that the Infoset Rec actually specifies
only "an infoset", and not in any way "the infoset" in some normatively
exclusive sense (though "derivative" specs of late seem quite eager to
treat it so).
| and too little (modularization is an interesting challenge.)
Actually, that isn't a problem with DTDs as much as it's a problem with
the (implicit) validation model. That is, if you assume that a DTD will
be comprehensive about a document (an "encompassing architecture" to the
HyTime folks) then modularization is a definite challenge. Of course,
DTDs were originally developed solely with comprehensiveness in mind, but
it's possible to relax the default scope and apply particular DTDs only to
parts of a document (as in "enabling architectures"). For example, RNG
can take a "maximal fit" rather than a "complete fit" view of validation.
Similarly, it's possible to assume the moral equivalent of (#DONTCARE) as
the content model of some elements, and thus delegate subtree validation
constraints to other DTDs.
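For concreteness, XML DTDs already ship with a blunt cousin of that
keyword: declared content ANY. A minimal sketch (element names here are
illustrative, not from any real DTD):

  <!-- validate the envelope, punt on the payload -->
  <!ELEMENT report  (title, payload)>
  <!ELEMENT title   (#PCDATA)>
  <!ELEMENT payload ANY>  <!-- roughly (#DONTCARE): no order or structure imposed -->

The catch is that ANY still requires every child element *type* to be
declared somewhere, so genuinely delegating the payload subtree to some
other DTD takes either more machinery or a validator willing to look the
other way.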
That said, *XML* DTDs are utterly crippled in relation to SGML DTDs, and
even those lack expressive power in some areas.
| [...] while DTDs do too little, and extending them requires a lot of ad
| hoc work.
If you mean things like parameter-entity (PE) games to shoehorn in
colonified (namespace-qualified) names, that's a colossal waste of time
and energy indeed.
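For anyone who hasn't had the pleasure, the game looks roughly like the
*.qname parameterization in XHTML Modularization (names below are
illustrative; external subset only, since PE references inside markup
declarations aren't allowed in the internal subset):

  <!-- prefix is redeclarable from a document's internal subset -->
  <!ENTITY % MyML.prefix  "my" >
  <!ENTITY % widget.qname "%MyML.prefix;:widget" >

  <!ELEMENT %widget.qname; (#PCDATA) >
  <!ATTLIST %widget.qname;
            id ID #IMPLIED >

All that ceremony buys you is letting the document author pick the prefix;
the validation is still string-matching on prefixes, not anything
namespace-aware.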