Ok. If you have a schema, you have the description
of the paths. Combinatorial explosion can still
occur, but since all you are doing is using it to
create a GUI, that's fine: it spits out one or
more paths based on your selections.
You want the document indices. Indexing the documents
is a different job and, although I'm no expert,
indexing XML documents for optimized searching is
still a dark art. Hopefully, given XQuery and the
Extreme Markup and XML 2005 conferences, we'll see
more papers soon. Given all the work being done
with XML databases, that's a certainty. So the
problem remains indexing the text nodes and, if
you are really ambitious, the contents of the
notations (e.g., indexing a photo by its contents).
Work on that last bit is getting a lot of attention
these days because otherwise, systems such as those
going into transportation, ports, hospitals, etc.,
aren't as useful. We do that here (Video Analyst).
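A rough sketch of what indexing the text nodes involves, using only
the standard JDK SAX API (the file name "manifest.xml" is made up):
the document is streamed once, and each non-empty text node is emitted
with the element path that leads to it, ready to be handed off to
whatever full-text indexer is in use.

import java.io.File;
import java.util.ArrayDeque;
import java.util.Deque;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class TextNodeExtractor extends DefaultHandler {

    private final Deque<String> path = new ArrayDeque<>();
    private final StringBuilder text = new StringBuilder();

    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        flush();               // attribute text collected so far to the parent element
        path.addLast(qName);
    }

    @Override
    public void endElement(String uri, String local, String qName) {
        flush();
        path.removeLast();
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        text.append(ch, start, length);
    }

    // Emit one (element path, text) pair; a real indexer would add it
    // to the index here instead of printing it.
    private void flush() {
        String t = text.toString().trim();
        text.setLength(0);
        if (!t.isEmpty()) {
            System.out.println("/" + String.join("/", path) + " -> " + t);
        }
    }

    public static void main(String[] args) throws Exception {
        // "manifest.xml" is a placeholder document name.
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new File("manifest.xml"), new TextNodeExtractor());
    }
}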
DeRose makes me humble. He is much smarter than
me and a much better skater. He is also quite
generous with his knowledge given proper respect.
Anyone who can make heads hurt on this list
deserves it. His papers on these topics from
the late 80s are seminal.
len
From: Robert Koberg [mailto:rob@koberg.com]
Bullard, Claude L (Len) wrote:
> Loading all of that from all of the docs is like searching
> all of the available tables to get that info. Doable but
> not for the faint of resources. Add full-text to that and it
> becomes a job for Google farms. How well would Google
> work if they weren't caching the web?
I guess this is my point. If you indexed your tables with Lucene and
searched on *that* index, you would use far fewer resources and it
would be much faster. But I was talking about XML, where, if searching
required bringing everything into DOMs, it would be much slower and
much more resource-intensive than an RDB.
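A minimal sketch of the kind of Lucene index being described, assuming
a Lucene 8.x-style API (the field names and the sample path/text are
illustrative): the extracted text is added to the index once, and later
searches run against that index rather than against re-parsed documents.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class LuceneXmlSearchSketch {
    public static void main(String[] args) throws Exception {
        Directory dir = new ByteBuffersDirectory();   // in-memory index, for the sketch only
        StandardAnalyzer analyzer = new StandardAnalyzer();

        // Index once: each (element path, text) pair becomes a small Lucene document.
        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer));
        Document doc = new Document();
        doc.add(new StringField("path", "/manifest/cargo/description", Field.Store.YES)); // made-up path
        doc.add(new TextField("text", "refrigerated containers routed through the port", Field.Store.YES));
        writer.addDocument(doc);
        writer.close();

        // Search the index, not the original documents.
        DirectoryReader reader = DirectoryReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);
        Query query = new QueryParser("text", analyzer).parse("containers");
        for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
            Document found = searcher.doc(hit.doc);
            System.out.println(found.get("path") + " : " + found.get("text"));
        }
        reader.close();
    }
}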
As for Steve DeRose's stuff -- my brain hurts from reading it...