- From: "Thomas B. Passin" <firstname.lastname@example.org>
- To: email@example.com
- Date: Fri, 10 Nov 2000 10:55:39 -0500
Simon St.Laurent remarked -
> At this point, I have a hard time accepting the line drawn
> between validating and non-validating parsers, or the
> justification for making all non-validating parsers
> understand and process whatever DTDs they happen to
> encounter. It seems it would have been wiser to make
> parsers behave consistently, either by always reading all
> of the DTD content or by ignoring it entirely. I spent a
> long time preferring the first option, but at this point
> I'm leaning toward the second.
> As fond as I have been of DTDs (believe it or not), I
> think it's well past time to extract them from the initial
> parsing process, and make them a post-processing tool,
> something like schemas. The document is whatever it
> contains, and DTD or schema processing is an addition to
> the document, not content at the same level as the
> document content.
Isn't it true that, in SGML, the DTD with its regular
grammar can be used to create a parser specialized for the
particular DTD - perhaps even on the fly as the document is
read? Yet XML seems to have been designed to avoid the need
for a customized parser. We use the same parser for all XML
documents; the parser (presumably?) doesn't redesign its
finite-state machine to fit the DTD. If this is true, it
strongly supports Simon's suggestion.

Comments, anyone? Especially parser-writers?
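To make the point concrete, here is a minimal sketch in Python
(using the standard library's non-validating ElementTree parser;
the "memo" content model below is my own made-up example, not
anything from the thread). The same generic parser reads any
well-formed document without knowing its DTD, and checking a
document against a content model can then be done as a separate
post-processing step over the parsed tree - which is essentially
what Simon is suggesting.

```python
import xml.etree.ElementTree as ET

# Two documents with entirely different structures; the same
# generic, non-validating parser handles both without being
# specialized to any particular DTD.
doc_a = "<memo><to>Simon</to><body>Hello</body></memo>"
doc_b = "<list><item>1</item><item>2</item></list>"

tree_a = ET.fromstring(doc_a)
tree_b = ET.fromstring(doc_b)

# "Validation" as post-processing: check an already-parsed tree
# against a hypothetical content model, memo := (to, body).
def conforms_to_memo_model(root):
    return root.tag == "memo" and [c.tag for c in root] == ["to", "body"]

print(conforms_to_memo_model(tree_a))  # True
print(conforms_to_memo_model(tree_b))  # False
```

The parsing step never changes; only the separate conformance
check knows anything about the grammar.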