On Sunday 20 January 2002 02:19 am, Tim Bray wrote:
> It's a website that has a form with one argument, which is
> the URL of some document. It reads the document, pulls out
> the namespace URIs, and for each one goes and sees if there's
> a RDDL. Then it produces a nice outlined analysis of the
> document, showing all the namespaces that apply to various
> parts of it, and offering to perform various schema validations,
> stylesheet-driven output generation tasks, or various other
> useful things, based on the resources out of the RDDLs; the
> xlink:title attributes on rddl:resources would be useful in
> generating this outline. Also for each namespace, allow you to
> click to get the human-readable info on that namespace. It would
> need to include a bunch of different schema validation engines
> and some rendering engines, but there are lots of those around.
> Seems to me you could cook this up, using freeware tools and
> either perl or python, in a couple of days.
This could be a web service too.
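Something like the following would cover the fetch-and-probe core of what Tim describes (a rough sketch in today's Python; the function names and the "is there a RDDL here?" heuristic are my own guesses, not anything Tim specified):

    import io
    import sys
    import urllib.request
    import xml.etree.ElementTree as ET

    RDDL_NS = "http://www.rddl.org/"  # the RDDL namespace URI

    def namespaces_in(url):
        """Return the set of namespace URIs declared anywhere in the document."""
        with urllib.request.urlopen(url) as resp:
            data = resp.read()
        uris = set()
        # "start-ns" events yield (prefix, uri) pairs as the parser
        # encounters each xmlns declaration.
        for _event, (_prefix, uri) in ET.iterparse(
                io.BytesIO(data), events=("start-ns",)):
            uris.add(uri)
        return uris

    def has_rddl(ns_uri):
        """Fetch the namespace URI and guess whether a RDDL lives there.
        The test (does the body mention the rddl namespace?) is crude
        but serviceable for a prototype."""
        try:
            with urllib.request.urlopen(ns_uri) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            # Unreachable URIs and non-http URIs (e.g. urn:) count as no RDDL.
            return False
        return RDDL_NS in body

    if __name__ == "__main__":
        for ns in sorted(namespaces_in(sys.argv[1])):
            print(ns, "->", "RDDL found" if has_rddl(ns) else "no RDDL")

From there, driving each discovered rddl:resource through the schema validators and stylesheets is the part that needs the "bunch of different engines" Tim mentions.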
I was musing this morning, wondering how useful this all really is.
What percentage of web sites are ever going to have RDDL or reasonable
metadata? What percentage is necessary to support inference through
metadata propagation over links? My guess is that unless 15% or so of
web sites do this, the overall impact will not be great (no
network effect that would be noticeable). Then again, perhaps within
the community that *does* do it, things will be much better.
I'm just thinking that so many things that are obviously good ideas
are never used by the population at large...