OASIS Mailing List Archives
Re: [xml-dev] XPath and a continuous, uniform information space

On Thu, Aug 15, 2013 at 8:43 PM, David Lee <dlee@calldei.com> wrote:

> This discussion is very interesting ... and joins well with what I have been thinking, discussing, and learning over the last few months.
>
> XML, HyperMedia, RDF, Web, InfoSpace
>
> Trying to be concise ... (ha!)

In this post you actually do a decent job of asking a question and providing its own answer. You start by asking why XML's URI access mechanisms do not define the mechanism of retrieval, and then go on to list some of the many pitfalls involved in retrieval. These pitfalls are of course well known, and they are the reason why the XML specs leave all that to the relevant RFCs, and why they absolutely should do so.

A couple of points, though:

> There is this fuzzy space where XML doesn't exactly define how to resolve references to documents, nor does it supply a unique document ID.
>
> ( I am not quite sure of the latter ... is doc("x") == doc("x") ? )

Yes.  From the XSLT 1.0 spec, section 12.1:

    Two documents are treated as the same document if they are identified
    by the same URI. The URI used for the comparison is the absolute URI
    into which any relative URI was resolved and does not include any
    fragment identifier. One root node is treated as the same node as
    another root node if the two nodes are from the same document. Thus,
    the following expression will always be true:

    generate-id(document("foo.xml")) = generate-id(document("foo.xml"))
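To make the guarantee concrete, here is a toy sketch in Python of how a processor can keep doc() stable: cache each parsed tree under its absolute, fragment-stripped URI, so the same URI always yields the identical document node. The `doc` function and `BASE` URI below are hypothetical illustrations, not any real processor's API.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urljoin, urldefrag

# Hypothetical resolver: caches parsed documents keyed by absolute URI,
# mirroring the identity rule in XSLT 1.0 section 12.1.
_cache = {}
BASE = "file:///project/"  # assumed base URI for resolving relative references

def doc(uri):
    # Resolve relative URI against the base, then drop any fragment,
    # since fragments do not participate in document identity.
    absolute, _fragment = urldefrag(urljoin(BASE, uri))
    if absolute not in _cache:
        # Stand-in for actually fetching and parsing the resource
        _cache[absolute] = ET.fromstring("<root/>")
    return _cache[absolute]

assert doc("x") is doc("x")        # same URI: the identical tree object
assert doc("x") is doc("x#part1")  # fragment is ignored for identity
```

The design choice is simply memoization on the absolute URI, which is what makes doc("x") == doc("x") hold for the lifetime of a transformation.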


> How do we make a Web-based InfoSpace that is reliable ... without having to have a local copy of it all?

We don't.  That would be taking a backward step from the Web to Project Xanadu.  Ted Nelson is a genius, but there are good, non-genius reasons why the Web succeeded where Xanadu never could.  You give those reasons in your very next paragraph:

> I suggest the web works today because of *humans*.  It is us humans who clickity-click the links and handle failover ... "Hmm, Google is down, try Yahoo."

Yes.  This was the Magna Carta of the history of the Web, courtesy of Tim Berners-Lee.

Uche Ogbuji                       http://uche.ogbuji.net
Founding Partner, Zepheira        http://zepheira.com

