- To: "Liam Quin" <liam@w3.org>,"Derek Denny-Brown" <derekdb@microsoft.com>
- Subject: RE: [xml-dev] Hostility to "binary XML" (was Re: [xml-dev] XML 2004 weblog items?)
- From: "Dare Obasanjo" <dareo@microsoft.com>
- Date: Mon, 22 Nov 2004 13:52:31 -0800
- Cc: <xml-dev@lists.xml.org>
- Thread-index: AcTQ2xQFo8EIGAaKRAqtsLO4r9nGNQAAeTcA
- Thread-topic: [xml-dev] Hostility to "binary XML" (was Re: [xml-dev] XML 2004 weblog items?)
> -----Original Message-----
> From: Liam Quin [mailto:liam@w3.org]
> Sent: Monday, November 22, 2004 1:34 PM
> To: Derek Denny-Brown
> Cc: xml-dev@lists.xml.org
> Subject: Re: [xml-dev] Hostility to "binary XML" (was Re:
> [xml-dev] XML 2004 weblog items?)
>
> On Mon, Nov 22, 2004 at 01:09:06PM -0800, Derek Denny-Brown wrote:
> > Most of the CPU cost of parsing is related to the abstract model of
> > XML, not the text parsing: duplicate attribute detection, character
> > checking, namespace resolution/checking. Every binary-XML
> > implementation I have researched which improves CPU utilization does
> > so by skipping checks such as these. At that point you are no longer
> > talking about XML.
>
> One can do validation in the writer and then plausibly skip
> the sort of checks you mention in a reader, and still be
> talking about XML, even with today's textual interchange formats.
Interesting, so if I'm writing an XML Web Service endpoint, I should
trust third parties to do the well-formedness and validity checking as a
"performance enhancement"? Even ignoring the security implications of
this, it still seems like a horrible idea.
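The checks Derek lists are well-formedness requirements that any conforming
parser must enforce on input, which is why skipping them on untrusted data is
contentious. As a minimal sketch (assuming Python's stdlib expat-based
`xml.etree.ElementTree`, not any parser discussed in this thread), a document
with a duplicate attribute is rejected at parse time:

```python
import xml.etree.ElementTree as ET

# Duplicate attributes violate XML 1.0 well-formedness
# (Unique Att Spec), so a conforming parser must reject them.
try:
    ET.fromstring('<item id="1" id="2"/>')
except ET.ParseError as e:
    print("rejected:", e)

# Well-formed input parses normally.
elem = ET.fromstring('<item id="1"/>')
print(elem.attrib)  # {'id': '1'}
```

A reader that skips such checks accepts the first document, and two "XML"
processors can then disagree about what it contains.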
--
PITHY WORDS OF WISDOM
It is impossible to make anything foolproof because fools are ingenious.
This posting is provided "AS IS" with no warranties, and confers no
rights.