Dare Obasanjo wrote:
> 1.) I can take a vanilla XSLT processor and pass it a stylesheet with
> EXSLT extension elements which my XSLT processor automatically learns
> how to process as valid stylesheet instructions.
> 2.) I can take a vanilla W3C XML Schema processor and pass it a schema
> with embedded Schematron assertions which it automatically learns how to
> use to validate an input document in addition to using the W3C XML
> Schema rules.
> since these are both "simple" cases of mixing XML vocabularies with
> agreed upon semantics.
> As far as I'm concerned this is an unfeasible problem to attempt to
> solve and claiming otherwise is as ludicrous as the claims many were
> making about AI in the 80s and about the Semantic Web in the 90s.
I wouldn't call those unfeasible... hard, maybe, but not impossible.
To solve it takes a few prerequisites:
1) Some way of getting code to run on anything. Perhaps fat binaries.
Perhaps a really minimal bytecode - a stack machine of some description,
maybe - that can be interpreted or compiled. Perhaps Java. Whatever.
With a sandboxing mechanism.
2) Standard interfaces for, for example, schema checking systems
independent of the schema language, so one can write interchangeable
modules for XML Schema and Schematron.
3) A global registry mapping namespace URIs to bits of code that
implement their semantics.
4) Better definition of the semantics of extension. In XSLT, I imagine
that an XSLT processor might be implemented in terms of a recursive
algorithm that alternates between a pattern-matching mode and a
rule-executing mode. In rule execution, it might have a big lookup table of
"xsl:for-each" and friends to decide how to evaluate each part of a
rule. In pattern matching, it might have a big lookup table of
"xsl:template" and... nothing else. So one might generalise that lookup
table into "look up the namespace URI in the global registry, check that
the returned module does indeed implement the 'Transformation'
interface, and then feed it the element name invoked along with the
transformation context and input and details of what to do with the
output etc. etc.".
5) Somebody to write those modules! Presumably this could fall to the
namespace authors - the schema for elements in the namespace and the
standard semantic declaration would go hand in hand.
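The registry-and-dispatch idea in points 3 and 4 can be sketched in a few lines. This is a minimal in-process mock-up, not a real global registry, and all the names here (Transformation, REGISTRY, evaluate, the toy XsltSemantics module) are hypothetical illustrations:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Transformation(Protocol):
    """Hypothetical standard interface that semantic modules for
    transformation languages would implement."""
    def evaluate(self, element_name: str, context: dict, node: object) -> object:
        ...

# The global registry: namespace URI -> semantic module.
# (Here just a dict; in reality it would fetch sandboxed code.)
REGISTRY: dict[str, object] = {}

def register(namespace_uri: str, module: object) -> None:
    REGISTRY[namespace_uri] = module

def dispatch(namespace_uri: str, element_name: str,
             context: dict, node: object) -> object:
    """The generalised lookup table: find the module for the namespace,
    check it really implements the Transformation interface, then feed it
    the invoked element along with the transformation context."""
    module = REGISTRY.get(namespace_uri)
    if module is None:
        raise LookupError(f"no semantics registered for {namespace_uri}")
    if not isinstance(module, Transformation):
        raise TypeError(f"{namespace_uri} does not implement Transformation")
    return module.evaluate(element_name, context, node)

# A toy module standing in for the official XSLT semantics:
class XsltSemantics:
    def evaluate(self, element_name, context, node):
        return f"evaluated {element_name}"

register("http://www.w3.org/1999/XSL/Transform", XsltSemantics())
```

The same registry could hand back modules for other interfaces - schema checking, say - keyed on the same namespace URI.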
Note that this isn't *forcing* semantics; it's just *providing default*
semantics. You'd still be free to parse an XSLT stylesheet and use it
to, say, produce a nice diagram of the transformation it embodies, using
your own knowledge of XSLT. The semantic modules might well only define
the semantics of those elements and attributes and extension functions
and whatnot when used for transformations. And you would be free to hard
code in your transformation engine that you know a quicker way to
implement xsl:template using some special hardware or algorithm you have
lying around, and thus avoid using the interpreted bytecode of the
official semantics, but then it's your job to make sure your semantics
matches theirs in all the areas that matter.
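The "hard code a quicker way" escape hatch amounts to wrapping the official module: specialise the elements you can do faster, fall back for the rest. A rough sketch, with all class and method names hypothetical:

```python
class Transformation:
    """Same hypothetical interface the registry's official modules use."""
    def evaluate(self, element_name, context, node):
        raise NotImplementedError

class OfficialXsltSemantics(Transformation):
    """Stand-in for the interpreted bytecode of the official semantics."""
    def evaluate(self, element_name, context, node):
        return ("official", element_name)

class NativeXsltSemantics(Transformation):
    """A hand-optimised replacement for selected elements. Anything it
    doesn't specialise is delegated to the official module, so it is the
    author's job to keep the specialised paths observably equivalent."""
    def __init__(self, fallback):
        self.fallback = fallback

    def evaluate(self, element_name, context, node):
        if element_name == "xsl:template":
            # the quicker special-cased implementation you have lying around
            return ("native", element_name)
        return self.fallback.evaluate(element_name, context, node)

engine = NativeXsltSemantics(OfficialXsltSemantics())
```

Because both objects implement the same interface, callers can't tell which one they got - which is exactly the contract that makes the override safe.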
A renderer might have a generic layout model for rendering, perhaps the
CSS box model, and it would dispatch based upon namespaces to semantics
modules for each namespace and, as long as they support the rendering
interface, ask them to render themselves. Thus XHTML, Docbook, MathML,
and so on could all coexist happily; Docbook might implement rendering
by just applying some XSLT to itself then chaining to the XHTML
renderer. Stuff like RDF embedded in HTML might not implement the
rendering interface, in which case it would have no effect on the
display - it'd just be ignored. Tasks other than rendering might take
a harsher view of namespaces for which no implementation of a
relevant interface can be found. But maybe XHTML and friends might
declare, in their semantics in the global registry, that they can be
used for 'documentation', in which case document types without explicit
documentation elements might just allow elements from their namespaces