From: "Michael Kay" <michael.h.kay@ntlworld.com>
> But really, when you get above 50Mb or so, you need to start looking at
> XML databases.
Another approach is to use streaming languages such as Perl and OmniMark
(and, I guess, Python?), especially if you are not updating the data,
just extracting information.
Of course, you may need to take several passes. And you may need to
have one pass of the data generate a program to be used for the next
pass, a venerable technique that is often overlooked. But multiple
passes with streaming languages is the way that many large-scale
publishing systems work. A lot can depend on whether your document
has an order that is amenable to your application: in particular,
storing metadata and keys before the data.
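To make the generated-program idea concrete, here is a rough Python
sketch (the file name, keys, and data shapes are invented for
illustration, not taken from any real system):

    # Pass 1 (sketch): scan the data and generate a tiny Python module --
    # the "program" for the next pass -- holding the decisions as a set.
    # In a real system the keys would come from streaming over the XML;
    # they are hard-coded here so the sketch stands alone.
    keys_found_in_pass_one = ["n42", "n97"]  # stand-in for real scan output

    with open("decisions.py", "w") as out:
        out.write("KEEP = {\n")
        for key in keys_found_in_pass_one:
            out.write(f"    {key!r},\n")
        out.write("}\n")

    # Pass 2 (sketch): import the generated module and let it drive
    # the extraction.
    import decisions
    print(decisions.KEEP)  # e.g. {'n42', 'n97'}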
A very typical way of constructing streaming programs on large
data sets is to do two passes:
1) Run over the data and extract all information that will be needed for
decisions that otherwise require random access or lookahead.
2) Run over the data and perform the extractions/analysis, using the
decisions gathered in the first pass.
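For concreteness, a minimal sketch of that two-pass pattern in Python,
using the standard library's streaming iterparse API (the element and
attribute names -- "record", "status", "id", "value" -- are invented
for illustration):

    import xml.etree.ElementTree as ET

    SOURCE = "big.xml"  # assumed input file

    # Pass 1: stream through once, collecting the decision information
    # (here, the ids of the records we will want later).
    wanted = set()
    for event, elem in ET.iterparse(SOURCE, events=("end",)):
        if elem.tag == "record":
            if elem.get("status") == "active":
                wanted.add(elem.get("id"))
            elem.clear()  # drop the finished subtree; memory stays flat

    # Pass 2: stream through again, extracting with those decisions in
    # hand, so no lookahead or random access is ever needed.
    for event, elem in ET.iterparse(SOURCE, events=("end",)):
        if elem.tag == "record":
            if elem.get("id") in wanted:
                print(elem.findtext("value"))
            elem.clear()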
Cheers
Rick Jelliffe