- To: "The Deviants" <email@example.com>
- Subject: Re: [xml-dev] misprocessing namespaces (was Re: [xml-dev] There is a meaning, but it's not in the data alone)
- From: Tim Bray <firstname.lastname@example.org>
- Date: Sun, 03 Feb 2002 20:49:34 -0800
- In-reply-to: <email@example.com>
- References: <firstname.lastname@example.org><4XOKTPSQ3W1VLJ9607AZVUSBAUOYX05.3c586905@MChamp><email@example.com><3C589A61.95139DF@prescod.net><firstname.lastname@example.org>
At 12:01 AM 03/02/02 -0500, Elliotte Rusty Harold wrote:
>>*If* there's ever an XML 2 - and that's a long shot - one of
>>the #1 requirements would be nuke entities, I think. -Tim
>
>I must say I'm surprised to hear that, especially coming from you.
>Just for curiosity's sake, would you mind elaborating on your
>reasoning? Personally, I've often thought unparsed entities were
>on the wrong side of the 90/10 divide, but parsed entities seem
>quite useful.
Actually, I have no trouble with unparsed entities, except that
the web seems to get by just fine without the extra level of
indirection they buy you.
I also have no big problem with parameter entities; they stay
off in the DTD where ordinary people and run-time code don't have
to deal with them.
But general parsed entities... yecch. Doing content aggregation
at the lexical level feels wrong. They cause all sorts of
baroque complexity in APIs. Non-validating parsers don't read
them. They cause all sorts of complexity for ID/IDREF management,
and they complicate namespace processing horribly. They are
totally aimed at document/publishing applications of XML, and
in my experience, they don't work that well there. Eliot Kimber
was telling us 8 years ago that they were basically broken and
we weren't smart enough to realise he was right.
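To make the "lexical level" point concrete, here is a minimal sketch using Python's stdlib parser (expat-based); the entity name `greet` and the document are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A general parsed entity declared in the internal DTD subset.
# The parser splices in the replacement text at the lexical level,
# before any tree structure exists, so downstream code (and anything
# doing ID/IDREF or namespace processing) only ever sees the result.
doc = """<?xml version="1.0"?>
<!DOCTYPE note [ <!ENTITY greet "Hello, world"> ]>
<note>&greet;</note>"""

root = ET.fromstring(doc)
print(root.text)  # the expanded replacement text, not "&greet;"
```

Note that a non-validating parser is only required to expand entities declared in the internal subset like this one; entities declared in an external DTD may silently go unread, which is one source of the API complexity complained about above.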
I think xml:include is probably a better stab at a solution
to the problem, but I also think we don't have enough experience
to know how big/important the inclusion problem is, and what the
right answer to it is. -Tim
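For contrast with entity expansion: the inclusion mechanism referred to here (what became the W3C XInclude spec) splices documents together at the tree level, after parsing. A minimal sketch with Python's stdlib `xml.etree.ElementInclude`; the `chapter1.xml` href and the in-memory loader are invented so the example is self-contained:

```python
from xml.etree import ElementTree as ET, ElementInclude

XI = "http://www.w3.org/2001/XInclude"

# Top-level document referencing another document by URI.
doc = ET.fromstring(
    '<book xmlns:xi="%s"><xi:include href="chapter1.xml"/></book>' % XI
)

# A loader that serves the included document from memory instead
# of fetching the href (in real use, the default loader reads the file).
def loader(href, parse, encoding=None):
    if parse == "xml":
        return ET.fromstring("<chapter>Some included text</chapter>")

# Inclusion operates on the parsed tree: the xi:include element is
# replaced by the included document's root element.
ElementInclude.include(doc, loader=loader)
print(ET.tostring(doc, encoding="unicode"))
```

Because the merge happens after parsing, each document is parsed (and namespace-resolved) on its own, sidestepping the problems described above for general entities.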