OASIS Mailing List Archives


Random Access XML

 Another name for simplification which doesn't enable extra capabilities 
 is "dumbing down". So how could XML be "simplified" in a way that 
 doesn't dumb it down?

 Providing the capability to scan an XML document, starting at any 
 arbitrary point, and reparsing from that point would open up some 
 different strategies for using it.

 Let's make the goal that we want to be able to open a document in which 
 every element name is unique and appears in a known order, and to be 
 able to locate any arbitrary element using a binary-chop mechanism. Not 
 all documents are like that, but many documents have unique elements in 
 a known partial order, so the goal is not so odd.

 What language features would allow this?

 1) For a start, we need to be able to know whether "<", "</" and ">" are 
 tag delimiters without knowing context. So we must ban direct use of "<" 
 and ">" in attributes and also get rid of CDATA sections. We should get 
 rid of comments and PIs too, for the same reasons. (Actually, we only 
 need to ban comments and PIs after the first start-tag; for other 
 reasons, we might like to treat the first start-tag, and everything 
 before it, specially.)

 This lets us leap in anywhere in a document and safely stream backwards 
 or forwards till we find a tag.
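 A minimal sketch of that resynchronisation step in Python, assuming the 
 restricted profile above ("<" and ">" occur only as tag delimiters); the 
 document bytes and function name are invented for illustration:

```python
# Leap to an arbitrary byte offset and resync at the next tag. This is
# only safe under the restrictions above: no CDATA sections, no "<" in
# attribute values, no comments or PIs after the first start-tag.

def next_tag(data, offset):
    """Scan forward from an arbitrary byte offset to the next complete tag."""
    start = data.find(b"<", offset)
    if start == -1:
        return None                      # ran off the end of the document
    end = data.find(b">", start)
    if end == -1:
        return None                      # truncated tag
    return data[start:end + 1].decode("utf-8")

doc = b"<book><title>It was a dark and stormy night</title></book>"
next_tag(doc, 20)   # offset 20 is mid-text; we resync at the next tag
```

 Scanning backwards is symmetric, using rfind instead of find.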

 2) Next, we need to be able to know what namespace prefixes are in 
 scope. So we need to put all namespace declarations in the first 
 start-tag, and nowhere else. So no namespace rebinding.

 This allows us to know the namespace of the element or attribute we are 
 looking at while doing random access.
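 As an illustration, here is a Python sketch that recovers every binding 
 from the first start-tag alone; the regex is a stand-in, not a full 
 attribute parser, and the example document is invented:

```python
import re

# With all namespace declarations confined to the first start-tag and no
# rebinding, one pass over that tag yields every prefix-to-URI mapping
# needed to interpret prefixes anywhere in the document.

def namespace_bindings(data):
    """Read prefix -> URI bindings from the document's first start-tag."""
    first_tag = data[data.index(b"<"):data.index(b">") + 1].decode("utf-8")
    bindings = {}
    for prefix, uri in re.findall(r'xmlns(?::(\w+))?="([^"]*)"', first_tag):
        bindings[prefix or ""] = uri     # "" is the default namespace
    return bindings

doc = b'<doc xmlns="http://example.org/d" xmlns:x="http://example.org/x"><x:a/></doc>'
namespace_bindings(doc)
```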

 Just those two things allow the goal, I think.
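 A sketch of the resulting chop in Python, assuming every element name is 
 unique and the document order of the start-tags is known in advance; the 
 document, order list, and helper name are all invented:

```python
import re

# Binary chop over raw bytes: probe the midpoint, resync to the next
# start-tag (features 1 and 2 make this safe), compare its position in
# the known order against the target, and halve the range.

TAG = re.compile(rb"<(\w+)[ >]")         # start-tags only; "</x" won't match

def chop_find(data, order, name):
    """Return the byte offset of the start-tag for `name`, or None."""
    rank = {n: i for i, n in enumerate(order)}
    target = rank[name]
    lo, hi = 0, len(data)
    while lo < hi:
        mid = (lo + hi) // 2
        m = TAG.search(data, mid)
        if m is None or rank[m.group(1).decode()] > target:
            hi = mid                     # overshot: target starts before mid
        elif rank[m.group(1).decode()] < target:
            lo = m.end()                 # undershot: discard the left half
        else:
            return m.start()
    return None

doc = b"<doc><head><title>t</title></head><body><p>hello</p></body></doc>"
order = ["doc", "head", "title", "body", "p"]
chop_find(doc, order, "title")
```

 Each probe costs one seek plus a short scan, so lookup is O(log n) 
 probes rather than a full parse of the document.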

 NOTE: I am taking it for granted that there is no DTD and that any 
 system of entity declarations allows CDATA entities only: special 
 characters, not tags. Indeed, if every entity reference expands to the 
 same or fewer characters than the reference string (e.g. "&amp;" expands 
 to "&") then entity expansion can be done in-place (padded by nulls of 
 some kind?), which reduces the buffer requirements.
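 A sketch of that in-place expansion in Python; the entity table and the 
 NUL-padding convention are assumptions for illustration:

```python
# Each reference expands to the same number of bytes or fewer, so the
# result fits in the original buffer, padded with NULs that a scanner
# would skip. Note that "&lt;" and "&gt;" are deliberately absent from
# the table: expanding them would put bare "<" and ">" back into
# character data and break feature 1.

ENTITIES = {b"&amp;": b"&", b"&apos;": b"'", b"&quot;": b'"'}

def expand_in_place(buf):
    """Expand entity references in buf without changing its length."""
    for ref, text in ENTITIES.items():
        start = 0
        while (i := buf.find(ref, start)) != -1:
            buf[i:i + len(ref)] = text + b"\x00" * (len(ref) - len(text))
            start = i + len(ref)

buf = bytearray(b"Jones &amp; Sons")
expand_in_place(buf)
```

 Because the buffer length never changes, all byte offsets computed 
 before expansion remain valid afterwards.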

 What if we extend that goal, and we want to open a document where every 
 element is unique in its parent (so x/y and z/y, but not a second 
 occurrence of x/y), and do the binary chop?

 I think we'd need 1) and 2) above, but we would also need a third 
 feature, which would require an extension to XML.

 3) The generic identifier would have to be more like an XPath.

     <book/title>It was a dark and stormy night</book/title>
     <book/section id="s1">
         <section/title>Chapter 1</section/title>

 Actually, the number of location steps in the GI would match the extent 
 of uniqueness of the name. So you would only need long paths for element 
 GIs that were reused in different contexts, *AND* which were thereby 
 enabled to be searched for in this fashion.

 Now what feature would we need to allow a binary chop to find any 
 element by position?

 4) Let's define an attribute for ordinal position:
     <book/section xml:ord="23" > ...

 This doesn't allow location of deep arbitrary non-unique elements: to 
 find book/section[23]/p[26]  you would first have to find section[23], 
 then scan forward to p[26] (or scan forward for </book/section> then 
 chop for p[26], or scan forward for book/section[24] then chop between 
 them for p[26] or whatever).
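 A sketch of that two-step lookup in Python, assuming the path-like GIs 
 of 3) and the xml:ord attribute of 4); the generated document, tag 
 shapes, and regexes are all invented:

```python
import re

SECTION = re.compile(rb'<book/section xml:ord="(\d+)"')
P_TAG = re.compile(rb"<book/section/p>")

def find_section(data, ord_wanted):
    """Binary chop for the section start-tag with xml:ord == ord_wanted."""
    lo, hi = 0, len(data)
    while lo < hi:
        mid = (lo + hi) // 2
        m = SECTION.search(data, mid)
        if m is None or int(m.group(1)) > ord_wanted:
            hi = mid                     # overshot: wanted tag starts earlier
        elif int(m.group(1)) < ord_wanted:
            lo = m.end()                 # undershot: discard the left half
        else:
            return m.start()
    return None

def find_p(data, section_ord, p_index):
    """book/section[section_ord]/p[p_index]: chop, then scan forward."""
    start = find_section(data, section_ord)
    if start is None:
        return None
    stop = data.find(b"</book/section>", start)  # don't run into section+1
    if stop == -1:
        stop = len(data)
    count = 0
    for m in P_TAG.finditer(data, start, stop):
        count += 1
        if count == p_index:
            return m.start()
    return None

# Invented test document: 49 sections of 30 paragraphs each.
PARA = b"<book/section/p>text</book/section/p>"
doc = b"".join(b'<book/section xml:ord="%d">' % n + PARA * 30
               + b"</book/section>" for n in range(1, 50))
find_p(doc, 23, 26)
```

 Only the first step is logarithmic; the forward scan for p[26] is 
 linear, but bounded by the size of one section.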

 The .NET XML API got quite approving noises from James Clark IIRC 
 because of its nice scan-forward features, but I think these could be 
 improved with a bit of massage of XML, to support random access better.


 Rick Jelliffe

