> I agree - particularly for deep analysis of data with multiple
> overlapping hierarchies. Much of what I have seen here involves
> biblical texts or linguistic data.
Those are the typical examples, because they're driven by linguistic
structures, but examples abound: being able to concurrently treat an
address as a string or extract just the zip codes, or to automagically
generate markup for names, sentences, etc. and to synthesize structures
around them to aid in processing.
The point is to really step outside the boundaries of inline markup as
we typically think about it, and to think of the *text* in the context
of markup, both explicit and implicit.
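
To make that concrete, here's a minimal sketch of one way implicit
markup could work (the names and the zip-code pattern are just my own
illustration, not any particular system): the text stays an unmodified
string, and the "markup" is synthesized beside it as standoff
annotations, i.e. (start, end, label) offsets.

    import re

    def annotate_zip_codes(text):
        """Yield standoff annotations for US zip codes in the text."""
        for m in re.finditer(r"\b\d{5}(?:-\d{4})?\b", text):
            yield (m.start(), m.end(), "zip")

    address = "1600 Pennsylvania Ave NW, Washington, DC 20500"

    # View 1: the address as a plain string.
    print(address)

    # View 2: the same text seen through synthesized markup.
    for start, end, label in annotate_zip_codes(address):
        print(f"<{label}>{address[start:end]}</{label}> "
              f"(offsets {start}-{end})")

    # The original text is never altered; the markup lives beside it,
    # and a reader could layer a different annotator (names, sentences)
    # over the very same string.

The same string can carry any number of such annotation layers at
once, which is what lets hierarchies overlap freely.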
XML implies a certain processing model... one which many people (IMHO
wrongly) think requires being "correctly typed", and that is where the
difference lies. One model espouses uniformity, commonality, and
"correctness"; the other, the ability of the reader to interpret a
document as they see fit and to modify their view *without* altering
the original text.
Having experimented quite a bit, I've come to be sympathetic to Ted
Nelson's view. Even though Xanadu never saw the light of day, many of
the ideas are perfectly valid, and technologically, a closed system
like that is quite "doable".