Re: [xml-dev] Got a huge XML document?

We've worked with a lot of reference works, which tend to be large, and for which we've needed to implement streaming processes (pre-XSLT 3.0, these have been awkward, cranky, hand-baked SAX streams). I can't share any of the XML, sadly, since it's all proprietary, but I'm always happy for an excuse to talk about them.
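For a flavour of what those hand-baked streams look like, here is a minimal Python sketch (not our production code -- the `<entry>` element name, the flat document shape, and the naive attribute serialization are all illustrative) that streams a large work and emits one serialized chunk per entry:

```python
import xml.sax
from xml.sax.saxutils import escape

class EntrySplitter(xml.sax.ContentHandler):
    """Stream a huge document and hand each <entry> to a sink callback."""

    def __init__(self, sink):
        super().__init__()
        self.sink = sink      # called once per completed entry
        self.buffer = None    # accumulates markup while inside an entry

    def startElement(self, name, attrs):
        if name == "entry":
            self.buffer = []
        if self.buffer is not None:
            # Naive attribute serialization -- fine for a sketch only.
            attr_text = "".join(f' {k}="{v}"' for k, v in attrs.items())
            self.buffer.append(f"<{name}{attr_text}>")

    def characters(self, content):
        if self.buffer is not None:
            self.buffer.append(escape(content))

    def endElement(self, name):
        if self.buffer is not None:
            self.buffer.append(f"</{name}>")
        if name == "entry":
            self.sink("".join(self.buffer))
            self.buffer = None

chunks = []
xml.sax.parseString(
    b"<dictionary><entry id='1'>set</entry><entry id='2'>run</entry></dictionary>",
    EntrySplitter(chunks.append),
)
```

Memory stays proportional to the largest single entry rather than the whole file, which is the entire point of the exercise.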

An example is the OED, which comes as a set of 26 XML files: the largest is S, which is about 350 MB. In total, the work is 2.6 GB of XML, but of course this is basically a list of 280,000-odd entries. Still, some are quite large (I think "set" is the biggest) and all have a complex internal structure (entries broken into senses, which further have quotations, all of which are independently searchable entities).
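To give a flavour of that internal structure (the element names here are invented for illustration -- this is not the actual OED markup):

```xml
<entry id="set">
  <sense n="1">
    <def>...</def>
    <quotation year="...">...</quotation>
    <quotation year="...">...</quotation>
  </sense>
  <sense n="2">...</sense>
</entry>
```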

Even when we deal with books with more complex global structure (like the complete annotated works of an author, or a scholarly bible with commentary), we tend to atomize them into chapters or sections or the like.  It's the only way for humans to work with them.  In those cases, it is true that the file contains essentially a sequence of chunks -- however, preserving the hierarchy is important, and this introduces a lot of complexity for streaming (especially with those nasty SAX parsers), because you want to maintain essentially all of the content of your ancestor nodes *that is not part of some other chunk* since that represents contextual metadata of interest.
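The bookkeeping that makes this painful is the ancestor stack. A stripped-down Python/SAX sketch of the idea (element names are illustrative; it ignores attributes and inter-element text, and a real implementation would also carry forward ancestor *content* such as titles, not just the tag names):

```python
import xml.sax
from xml.sax.saxutils import escape

class ContextualChunker(xml.sax.ContentHandler):
    """Emit each <section> re-wrapped in its open ancestor elements."""

    CHUNK = "section"  # illustrative chunk element name

    def __init__(self, sink):
        super().__init__()
        self.sink = sink
        self.ancestors = []   # names of currently open non-chunk elements
        self.buffer = None    # markup accumulated inside the current chunk

    def startElement(self, name, attrs):
        if name == self.CHUNK:
            self.buffer = []
        elif self.buffer is None:
            self.ancestors.append(name)
        if self.buffer is not None:
            self.buffer.append(f"<{name}>")

    def characters(self, content):
        if self.buffer is not None:
            self.buffer.append(escape(content))

    def endElement(self, name):
        if self.buffer is not None:
            self.buffer.append(f"</{name}>")
            if name == self.CHUNK:
                # Re-wrap the chunk in its ancestors so context survives.
                opens = "".join(f"<{n}>" for n in self.ancestors)
                closes = "".join(f"</{n}>" for n in reversed(self.ancestors))
                self.sink(opens + "".join(self.buffer) + closes)
                self.buffer = None
        else:
            self.ancestors.pop()

chunks = []
xml.sax.parseString(
    b"<book><chapter><section>one</section><section>two</section></chapter></book>",
    ContextualChunker(chunks.append),
)
```

Each emitted chunk is then a small, well-formed document in its own right, which is what downstream tools (and humans) actually want.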

Probably the largest example of a very deep nesting structure that I've worked on is the Biblioteca Teubneriana Latina, which is essentially every known classical Latin work of literary interest (it doesn't include laundry lists).  It's about 795MB, delivered in 36 files, but the first one of these incorporates the rest using XML entity references!  So we process it as a single file -- there is a tiny fragment of useful metadata in that first file.
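The master-file trick is just the standard external-entity mechanism; schematically it looks like this (filenames invented, not the actual delivery):

```xml
<!DOCTYPE corpus [
  <!ENTITY vol02 SYSTEM "volume02.xml">
  <!ENTITY vol03 SYSTEM "volume03.xml">
  <!-- ...one entity per remaining file... -->
]>
<corpus>
  <metadata>the tiny fragment of useful metadata lives here</metadata>
  &vol02;
  &vol03;
</corpus>
```

Any parser configured to resolve external entities then sees the whole corpus as one logical document.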

That is broken down by a table-of-contents scheme that goes: letter, author, work, and then an arbitrary scheme that depends on the work, down to the individual page or line. Even there, I think the deepest node in the TOC is about 12 levels down. You can browse the TOC here: www.degruyter.com/db/btl, although you have to pay a lot to get the text of the entries (your library or institution might have it).

-Mike

On 9/12/2013 6:48 PM, Damian Morris wrote:
I've got some XML in my test suite for Xmplify - individual files - that are from the WIPO, and weigh in at 360 MB...

Cheers,

Damian

--

MOSO Xmplify XML Editor - Xmplary XML for Mac OS X

t: @xmplify



On 13/09/2013, at 8:09 AM, Gareth Oakes <goakes@gpslsolutions.com> wrote:

> From: Michael Kay <mike@saxonica.com>
> Date: Friday, 13 September 2013 6:49 AM
>> On 12 Sep 2013, at 19:47, David Lee wrote:
>> 
>> In my experience, ALL Large XML files are really collections of smaller files.
> I have never seen a single XML document of any large size that isn't simply
>> <root>
>>    <row> document 1 .... </row>
>> ..... 10 bizillion times
>> </root>
> That's certainly a very common pattern, but I've seen a few examples that
> don't quite fit it. For example, a database dump of 50 tables each of which
> fits the above pattern. Or GIS data consisting of large numbers of objects of
> a wide variety of different kinds. What does seem to be true is that as files
> get larger, it's rare for the hierarchy to get deeper.

I agree with that and wanted to share a brief note on our experience, dealing
primarily with XML that is to be printed in some format.

While XML for things like parts catalogues can get quite large, those documents
tend to follow a pattern of repeating sets of data. Some of the larger XML
documents we deal with (which are not "database dumps") tend to be lengthy
pieces of legislation.

While legislation can be broken down into provisions and so on, there is still
enough cross-referencing and relationships between the information to make it
tricky to break up into standalone components.

Having said that, I don't think I've seen a single piece of legislation (e.g. a
Bill or an Act) exceed 100 MB in XML document size.

-Gareth



