Re: should all XML parsers reject non-deterministic content models?
- From: "TAKAHASHI Hideo(BSD-13G)" <email@example.com>
- To: Daniel.Veillard@imag.fr
- Date: Mon, 15 Jan 2001 11:15:14 +0900
Thanks for all the information. It helped me a lot.
Now I can safely say that the parser we have implemented is compliant,
but I have to remind users to be careful when writing content models.
Okay, I will do that.
It would be nice if some future version of the XML spec required XML
processors to handle non-deterministic content models in order to be
compliant with that named version. Then it would be easier for me to
explain things to users: I could simply recommend that they make sure
they buy (or download :-) processors compliant with that particular
version, to ensure interoperability. Yes, there is surely much more to
interoperability between systems than XML spec compliance, but it is at
least one of the first few steps.
In the meantime, it might be nice for DTD authoring tools to be able to
check for non-deterministic content models. Given such a tool, you could
say that the user's DTD will work on any XML 1.0 compliant processor, as
long as the tool says it's OK. The user wouldn't have to understand
automata theory (which may be fun for some but painful for others).
Since algorithmic conversion from non-deterministic content models to
deterministic ones seems possible (in most cases?), the tool might even
suggest rewrites. Does any existing tool do such things (detect and/or
rewrite)?
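(For what it's worth, the determinism check such a tool would need is
not hard to implement. Below is a minimal Python sketch using the
Glushkov first/follow-set construction: a content model is
non-deterministic exactly when the same element name is reachable at two
different positions at once. All class and function names here are my
own invention for illustration; this is not taken from any existing
tool.)

```python
from itertools import count

# Hypothetical sketch of the check a DTD tool could run: build the
# Glushkov first/follow sets of a content model and flag it as
# non-deterministic if two distinct positions with the same element
# name are ever reachable together.

class Sym:                       # a single element-type occurrence, e.g. a
    def __init__(self, name): self.name = name
class Seq:                       # sequence: (x , y , ...)
    def __init__(self, *parts): self.parts = parts
class Alt:                       # choice: (x | y | ...)
    def __init__(self, *parts): self.parts = parts
class Star:                      # repetition: x*
    def __init__(self, inner): self.inner = inner

def analyze(node, ids):
    """Return (nullable, first, last, follow); a position is (name, id)."""
    if isinstance(node, Sym):
        p = (node.name, next(ids))
        return False, {p}, {p}, {}
    if isinstance(node, Star):
        _, first, last, follow = analyze(node.inner, ids)
        for q in last:                        # loop back: last -> first
            follow.setdefault(q, set()).update(first)
        return True, first, last, follow
    if isinstance(node, Alt):
        nullable, first, last, follow = False, set(), set(), {}
        for part in node.parts:
            n, f, l, fol = analyze(part, ids)
            nullable |= n; first |= f; last |= l; follow.update(fol)
        return nullable, first, last, follow
    # Seq: thread follow sets left to right
    nullable, first, last, follow = True, set(), set(), {}
    for part in node.parts:
        n, f, l, fol = analyze(part, ids)
        follow.update(fol)
        for q in last:                        # previous last -> this first
            follow.setdefault(q, set()).update(f)
        if nullable: first |= f
        last = last | l if n else l
        nullable &= n
    return nullable, first, last, follow

def is_deterministic(model):
    _, first, _, follow = analyze(model, count())
    for reachable in [first, *follow.values()]:
        names = [name for name, _ in reachable]
        if len(names) != len(set(names)):     # same name at two positions
            return False
    return True

# The example from the thread, (a|b)*,a,(a|b), is flagged; (a,b)* is fine.
ambiguous = Seq(Star(Alt(Sym('a'), Sym('b'))), Sym('a'),
                Alt(Sym('a'), Sym('b')))
print(is_deterministic(ambiguous))                       # False
print(is_deterministic(Star(Seq(Sym('a'), Sym('b')))))   # True
```

(Suggesting rewrites is harder, since, as noted below, some of these
languages have no 1-unambiguous expression at all.)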
Daniel Veillard wrote:
> On Sun, Jan 14, 2001 at 05:42:16PM +0700, James Clark wrote:
> > Daniel Veillard wrote:
> > > On Sun, Jan 14, 2001 at 04:42:55PM +0900, TAKAHASHI Hideo(BSD-13G) wrote:
> > > > Hello.
> > > >
> > > > I understand that the XML 1.0 spec prohibits non-deterministic (or,
> > > > ambiguous) content models (for compatibility, to be precise).
> > >
> > > Note also that this is stated in a non-normative appendix.
> > It is also stated normatively in the body of the spec
> > (http://www.w3.org/TR/REC-xml#sec-element-content): "For compatibility,
> > it is an error if an element in the document can match more than one
> > occurrence of an element type in the content model."
> Oops, right, I missed this sentence, sorry.
> > > In practice this is a very good rule because it greatly simplifies
> > > the validation of a content model.
> > Complicating things for the user to make things simpler for the parser
> > writer seems in general a bad trade-off to me. Also this decreases
> > interoperability (as you've observed). The only justification is
> > compatibility with SGML.
> I'm more convinced by the argument that some content models may
> not be expressible as 1-unambiguous languages. But I'm still
> a bit worried that either way interoperability will be a concern.
> In this case compatibility with SGML creates a risk of interoperability
> problems between XML tools; "results are undefined" doesn't sound good.
> > In fact, there is a very simple algorithm available that handles
> > non-determinism just fine (it doesn't require you to construct a NFA and
> > then do the subset construction). See
> > http://www.flightlab.com/~joe/sgml/validate.html (TREX uses a variation
> > on this).
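(For readers browsing the archive: one well-known technique in this
family is matching by Brzozowski derivatives, which also needs no NFA or
subset construction and handles non-deterministic content models with no
extra work. The sketch below is purely illustrative and not necessarily
the exact algorithm described at the link above; the tuple encoding is
my own.)

```python
# A minimal Brzozowski-derivative matcher. Expressions are tuples:
# ('sym', name), ('alt', e1, e2), ('seq', e1, e2), ('star', e), plus the
# constants EMPTY (matches nothing) and EPSILON (matches the empty list).
EMPTY, EPSILON = 'empty', 'epsilon'

def nullable(e):
    """Can e match the empty sequence of children?"""
    if e == EMPTY: return False
    if e == EPSILON: return True
    op = e[0]
    if op == 'sym': return False
    if op == 'alt': return nullable(e[1]) or nullable(e[2])
    if op == 'seq': return nullable(e[1]) and nullable(e[2])
    return True                               # 'star'

def deriv(e, name):
    """Derivative of e with respect to a child element `name`."""
    if e in (EMPTY, EPSILON): return EMPTY
    op = e[0]
    if op == 'sym':
        return EPSILON if e[1] == name else EMPTY
    if op == 'alt':
        return ('alt', deriv(e[1], name), deriv(e[2], name))
    if op == 'seq':
        d = ('seq', deriv(e[1], name), e[2])
        return ('alt', d, deriv(e[2], name)) if nullable(e[1]) else d
    return ('seq', deriv(e[1], name), e)      # 'star'

def matches(e, children):
    for name in children:
        e = deriv(e, name)
    return nullable(e)

# The 1-ambiguous model from this thread, (a|b)*,a,(a|b), validates fine:
model = ('seq', ('star', ('alt', ('sym', 'a'), ('sym', 'b'))),
                ('seq', ('sym', 'a'), ('alt', ('sym', 'a'), ('sym', 'b'))))
print(matches(model, ['b', 'a', 'b']))   # True
print(matches(model, ['b']))             # False
```

(The derivative expressions grow as you go; a real validator would
simplify them, but that is an optimization, not a correctness issue.)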
> Thanks for the pointer !
> > The thesis proves the opposite: that there are some regular expressions
> > that do not denote 1-unambiguous languages (see p52). She gives the
> > example of
> > (a|b)*,a,(a|b)
> Hum, right, this makes sense and is also stated in Appendix E.
> thanks a lot,
> Daniel Veillard | Red Hat Network http://redhat.com/products/network/
> firstname.lastname@example.org | libxml Gnome XML toolkit http://xmlsoft.org/
> http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/