- From: "Thomas B. Passin" <tpassin@home.com>
- To: xml-dev@lists.xml.org
- Date: Fri, 10 Nov 2000 10:55:39 -0500
Simon St.Laurent remarked -
...
> At this point, I have a hard time accepting the line drawn between
> validating and non-validating parsers, or the justification for making
> all non-validating parsers understand and process whatever DTDs they
> happen to encounter. It seems it would have been wiser to make
> non-validating parsers behave consistently, either by always reading
> all of the DTD content or by ignoring it entirely. I spent a long time
> preferring the first option, but at this point I'm leaning toward the
> second.
>
> As fond as I have been of DTDs (believe it or not), I think it's well
> past time to extract them from the initial parsing process, and make
> them a post-processing tool, something like schemas. The document
> contains whatever it contains, and DTD or schema processing is
> considered an addition to the document, not content at the same level
> as the actual document content.
>
Isn't it true that, in SGML, the DTD, with its regular grammar, can be
used to create a parser specialized for the particular DTD - perhaps
even on the fly, as the document is read? Yet XML seems to have been
designed to avoid the need for a customized parser. We use the same
parser for all XML documents; the parser (presumably?) doesn't
redesign its finite-state machine to fit the DTD. If this is true, it
strongly supports Simon's suggestion.
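
To make that concrete, here is a rough Python sketch (the names are
mine and purely illustrative, and it ignores mixed content, ANY,
EMPTY, and so on) of the kind of specialization an SGML system can do:
a content model is a regular expression over child-element names, so
it can be compiled, even on the fly, into a matcher tailored to that
one DTD.

import re

def compile_content_model(model):
    """Compile a DTD-style content model, e.g. '(title, author*, chapter+)',
    into a regex matching a comma-terminated sequence of child names."""
    parts = []
    for tok in re.findall(r'[A-Za-z_][\w.:-]*|[()*+?|,]', model):
        if tok == ',':
            continue                       # ',' just separates siblings
        elif tok in '()*+?|':
            parts.append(tok)              # grouping/repetition map directly
        else:
            parts.append('(?:%s,)' % tok)  # each name consumes 'name,'
    return re.compile(''.join(parts) + r'\Z')

def validate(children, model):
    """Check an element's child names against its content model."""
    return bool(compile_content_model(model).match(
        ''.join(name + ',' for name in children)))

# validate(['title', 'chapter', 'chapter'], '(title, author*, chapter+)')
#     -> True
# validate(['chapter', 'title'], '(title, author*, chapter+)')
#     -> False

A generic XML parser, as noted above, does no such per-DTD
compilation; it runs the same fixed machinery over every document.
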
Comments, anyone? Especially parser-writers?
Cheers,
Tom Passin