- From: Chris Smith <firstname.lastname@example.org>
- To: email@example.com
- Date: Thu, 11 Dec 1997 03:00:11 -0500 (EST)
I'm part of a group that has decided to use XML as an encoding for
documents which are effectively carrying transactions. Seeing XML make
it to Proposed Recommendation is great, and makes our decision less of
a gamble.
Part of this work requires that these documents carry document
authentication information. This, in turn, requires that some regions
of an XML document must be transported *exactly*, and must be received
and checked identically so that the message authentication actually
works. The fact that we are considering including email as a
transport mechanism doesn't help matters.
There are two questions at hand, largely directed at those creating
parsers. I'd like to know if the application requirements we are
proposing ("what to do with the document") are going to be incredibly
difficult to manage, given what the parsers are providing. I confess
I'm just getting started here - I will get to investigating the
various parsers. For now the questions may be useful anyway.
The first criterion is that message authentication is applied to an
element in the document. This is a start to precisely defining what is
being checked. The second criterion is that the message authentication
must be applied to the XML document as represented in UTF-16 encoding,
with big-endian convention, AS IT IS WRITTEN. This is to prevent us
having to specify a consistent *internal* representation. The XML spec
itself helps define a consistent *external* representation, which we
figure is easier to stick with than dealing with all the
cross-platform issues. The question: can this readily be dealt with?
Is it straightforward to ask for MessageAuthentication over
<element>...</element>, with all the content included?
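To make the requirement concrete, here is a minimal sketch of what we
have in mind, assuming (hypothetically) an HMAC-style MAC and that
both ends agree on the element's exact character sequence as written,
with no re-serialization by the parser:

```python
# Sketch only: the MAC algorithm (HMAC-SHA-256) and the shared key are
# assumptions for illustration, not part of the proposal itself.
import hmac
import hashlib

def element_mac(raw_element_text, key):
    # Encode the element AS IT IS WRITTEN in big-endian UTF-16
    # (no byte-order mark), so sender and receiver MAC identical bytes.
    data = raw_element_text.encode("utf-16-be")
    return hmac.new(key, data, hashlib.sha256).hexdigest()

key = b"shared-secret"
elem = "<element>Hi&spc;there!</element>"
# Any change to the element's written form changes the MAC.
print(element_mac(elem, key) != element_mac(elem + " ", key))  # True
```

The point is that the thing being hashed is the external UTF-16BE byte
stream, not whatever internal form a parser hands back.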
The second question is much less firm right now. We would like to make
whitespace handling robust - if someone along the way uses a tool
which breaks a line, we should be able to fix it rather than die.
If we add the following character entities to our DTD,
<!ENTITY spc " ">
<!ENTITY tab "&#9;">
<!ENTITY cr "&#13;">
<!ENTITY lf "&#10;">
then it should be possible to use these to represent 'wanted'
whitespace, and thus allow for a simple rule prior to checking message
authentication - that is, remove all 'native' space, tab, LF, and CR
from the #PCDATA and check what remains (whitespace inside tags is
handled in a more draconian fashion). According to the previous
section, "Hi&spc;there!" will be checked exactly the way you see it
here - not as "Hi there!". The question: is this distinction (between,
e.g., the native 0x0009 and &tab;, which expands to 0x0009) going to
be difficult to keep track of?
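The rule above can be sketched as follows - a toy normalization,
assuming it runs over the *unparsed* character data, where &spc; and
friends are still literal entity references rather than expanded
characters:

```python
# Sketch: strip all 'native' space, tab, CR, and LF from the #PCDATA,
# so that only entity-encoded ('wanted') whitespace survives into the
# text that message authentication is checked against.
import re

def normalize_pcdata(raw):
    # Entity references such as "&spc;" contain none of these four
    # characters, so they pass through untouched.
    return re.sub(r"[ \t\r\n]", "", raw)

print(normalize_pcdata("Hi&spc;there!"))      # Hi&spc;there!
print(normalize_pcdata("Hi &spc; \nthere!"))  # Hi&spc;there!
```

Note this only works before entity expansion: once a parser has turned
&tab; into 0x0009 it is indistinguishable from a native tab, which is
exactly the bookkeeping problem the question is asking about.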
Chris Smith <firstname.lastname@example.org>
xml-dev: A list for W3C XML Developers. To post, mailto:email@example.com
Archived as: http://www.lists.ic.ac.uk/hypermail/xml-dev/
To (un)subscribe, mailto:firstname.lastname@example.org the following message;
To subscribe to the digests, mailto:email@example.com the following message;
List coordinator, Henry Rzepa (mailto:firstname.lastname@example.org)