Hi Folks,

This is a fantastic discussion. Let me summarize the key ideas being discussed. Please tell me which parts I have misunderstood.

1. We need an info space.

An info space is a distributed collection of data. The data that constitutes an XML "document" may be scattered across the Web, but the user has no knowledge or awareness of this; he/she just experiences an XML document.

2. We need domain-specific navigation axes.

Today we have only generic navigation axes: we can navigate through XML documents using axes such as child, parent, descendant, and ancestor. However, those axes require knowing where the data is physically located. For example: "Hey, the data is in the child element, so I will navigate to it using the XPath child::___ axis." In an info space you don't know where the data is physically located, so the generic navigation axes are useless. We need domain-specific axes. For example, the vacant-rooms::___ axis is a domain-specific axis used to navigate to all the <vacant-room> elements, wherever they may physically reside in the info space.

3. We need to be able to define super-structures that integrate diverse bits of data into a single logical structure.

Forget the current notion of a document, i.e., a physical file sitting on your hard drive. In the info space such documents don't exist. The data that makes up an XML "document" may be scattered far and wide. To give the user the experience of a "document" we need to be able to define a super-structure layer on top of the scattered bits of data.

Am I missing any key concepts?
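To make point 2 concrete, here is a minimal sketch of what a domain-specific axis might do behind the scenes. This is not a real XPath extension; the `vacant-rooms` axis, the fragments, and the `domain_axis` helper are all hypothetical, standing in for data scattered across several web resources.

```python
# A "domain axis" that resolves <vacant-room> elements from fragments
# scattered across an info space, hiding their physical location.
# All names and fragments here are invented for illustration.
import xml.etree.ElementTree as ET

# Pretend these three fragments live on three different servers.
INFO_SPACE = [
    "<rooms><vacant-room id='101'/><occupied-room id='102'/></rooms>",
    "<annex><vacant-room id='201'/></annex>",
    "<wing><vacant-room id='305'/></wing>",
]

def domain_axis(name):
    """Navigate a domain-specific axis: yield every matching element,
    wherever it physically resides in the info space."""
    for fragment in INFO_SPACE:
        root = ET.fromstring(fragment)
        # Descendant-or-self search, so callers never need to know
        # a fragment's internal structure.
        yield from root.iter(name)

vacant = [room.get("id") for room in domain_axis("vacant-room")]
print(vacant)  # -> ['101', '201', '305']
```

The point of the sketch is that the caller asks for `vacant-room` elements by domain concept, not by structural position: no child::, no descendant::, no URLs.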
/Roger
From: Peter Hunsberger [mailto:peter.hunsberger@gmail.com]
As I followed much of this discussion it struck me as a bit document-centric, so I absolutely agree: "nodes on demand". I have a project I'd like to embark on if I can find the time, which is driving something such as Saxon directly from a graph database. Ultimately I want graph traversal and graph composition, and I see tools such as XPath and XSLT as pretty good competition for some of the current ways of doing that.
Peter Hunsberger
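Peter's idea of XPath-style traversal over a graph store can be caricatured in a few lines. The graph shape, the node names, and the axis functions below are all invented for illustration; a real implementation (e.g., Saxon over a graph database) would, of course, need a proper node model.

```python
# A toy graph store with XPath-like "child" and "descendant" axes.
# The graph and its names are hypothetical.
GRAPH = {
    "library": ["shelf-1", "shelf-2"],
    "shelf-1": ["book-a", "book-b"],
    "shelf-2": ["book-c"],
    "book-a": [], "book-b": [], "book-c": [],
}

def child(node):
    """child:: axis -- direct successors in the graph."""
    return GRAPH.get(node, [])

def descendant(node):
    """descendant:: axis -- depth-first transitive closure."""
    for c in child(node):
        yield c
        yield from descendant(c)

print(list(descendant("library")))
# -> ['shelf-1', 'book-a', 'book-b', 'shelf-2', 'book-c']
```

The interesting design question, which the toy version dodges, is what the other axes (parent, ancestor, sibling) mean when the graph is not a tree.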
On Thu, Aug 15, 2013 at 10:46 AM, Michael Kay <mike@saxonica.com> wrote:

Come to think of it, perhaps the problem is more that we equate an "XML document" with a "web resource". What we perhaps need is a way of distributing a single XML document over a large collection of web resources, and then navigating around that XML document seamlessly, using XPath? Of course we can do that crudely already, using entities or XInclude. Perhaps we just need a smarter implementation of transclusion, where the document fragments are fetched on demand when XPath navigation needs them, rather than being assembled eagerly by the XML parser.

Michael Kay
Saxonica

On 15 Aug 2013, at 16:11, Uche Ogbuji wrote:
On Thu, Aug 15, 2013 at 12:39 AM, Hans-Juergen Rennau <hrennau@yahoo.de> wrote:
Well this *is* one area in which the Web has seen a lot of experimentation, and I would say that the results have not been encouraging. Attempts to foster additional key-lookup-like resource access (i.e., URI schemes) have gone nowhere (even DOI, as widely used as it is, I think only proves that if you try to set up an alternative to HTTP URIs, people will simply re-layer HTTP URIs back on top of that alternative). I do agree that some sort of abstract index mechanism, shareable across Web resources, would be a Very Good Thing, as you and Michael have variously suggested, but the first step is probably puzzling out why others failed (the Tag URI scheme seemed promising, but never really worked out).
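Michael's "smarter transclusion" above, where fragments are fetched only when navigation first touches them, can be sketched as follows. The `REMOTE` table, URIs, and `LazyInclude` class are hypothetical stand-ins for real web resources and a real XInclude-aware processor.

```python
# A sketch of on-demand transclusion: fragments are fetched lazily
# when navigation needs them, not assembled eagerly by the parser.
# All URIs and names are invented for illustration.
import xml.etree.ElementTree as ET

REMOTE = {  # stand-in for web resources keyed by URI
    "http://example.org/ch1": "<chapter n='1'><p>one</p></chapter>",
    "http://example.org/ch2": "<chapter n='2'><p>two</p></chapter>",
}

FETCHED = []  # records which resources were actually retrieved

class LazyInclude:
    """Resolves an include to its element tree on first access only."""
    def __init__(self, href):
        self.href = href
        self._tree = None

    def tree(self):
        if self._tree is None:              # fetch on demand
            FETCHED.append(self.href)
            self._tree = ET.fromstring(REMOTE[self.href])
        return self._tree

doc = [LazyInclude("http://example.org/ch1"),
       LazyInclude("http://example.org/ch2")]

# Navigate into only the first chapter; the second is never fetched.
first = doc[0].tree().find("p").text
print(first, FETCHED)  # -> one ['http://example.org/ch1']
```

With eager XInclude, both chapters would have been fetched and parsed up front; here the second resource is never touched because no XPath step reached into it.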