Thank you, Alaric.
Finally, someone thinks architecturally, that is,
systematically, which is the point of the symbol
grounding article: one cannot ground symbols without
a systematic means for compositing the primitives of
the symbol set into meaningful statements, where
'meaningful', in our case, means running code. Note
also that the article clearly delineates human
behaviors, and even if we 'intend' machine behaviors,
it is the coupling of symbols to behaviors that
forms the system.
No identity without identification. No meaning
without code. That's the web because that's a
computer. Debate the details as long as necessary.
For those who responded in the "XML is only a syntax,
why should we care?" thread: peace. We all know
XML is only a syntax, but coupling it to behaviors
is what XML systems are about and what the notion
of symbol grounding is about. That is what an
HTML, X3D, SVG, or XSLT document is for. That is
what XML application languages do. The question
is intended to elicit discussions of the utility
of combinations of 'application languages'. Why
and how should we combine these and what combinations
are meaningful? MathML: might fit anywhere.
HTML: fits on any surface. SVG fits on any
surface. X3D: fits in a device context. It
can contain HTML, MathML, SVG in theory, but
practically, only SVG is a like system and
there are object model problems with putting
these together meaningfully except where, again,
the SVG is composited into a surface (say Material
node).
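To make the kind of combination in question concrete, here is a
minimal, purely illustrative compound document: XHTML hosting a
MathML island and an SVG island, each asserted by its namespace.
Whether the aggregate means anything is exactly the object-model
question:

  <html xmlns="http://www.w3.org/1999/xhtml">
    <body>
      <p>Inline math:
        <math xmlns="http://www.w3.org/1998/Math/MathML">
          <mrow><mi>E</mi><mo>=</mo><mi>m</mi><msup><mi>c</mi><mn>2</mn></msup></mrow>
        </math>
      </p>
      <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
        <circle cx="50" cy="50" r="40"/>
      </svg>
    </body>
  </html>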
There is a hint here: the meaningfulness
of the combinations can only be determined by
the compatibility of the object models because,
as we all know, the meaningfulness of the
syntactic combinations is essentially zero except
by inference (yes, an interpreter can be created
to analyze it like natural language, but so what?).
Dare: Internet Explorer. See the means for
annotating the presence of VML in an HTML document.
Big surprise. It uses a namespace declaration.
Note also how to attach HTC behaviors using namespace
declarations. It provides a means to discover
that the document asserts the namespace aggregate
is 'meaningful' by declaring it in the root and
associating it to the semantics via the CSS stylesheet.
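For those who have not looked, the pattern is roughly this (the VML
binding is the documented IE idiom; the custom calendar tag and
calendar.htc are invented here for illustration):

  <html xmlns:v="urn:schemas-microsoft-com:vml"
        xmlns:my="http://example.com/widgets">
  <head>
    <style>
      v\:* { behavior: url(#default#VML); }          /* bind the VML namespace to its renderer */
      my\:calendar { behavior: url(calendar.htc); }  /* bind a custom tag to an HTC component */
    </style>
  </head>
  <body>
    <v:oval style="width:100px;height:50px" fillcolor="red"/>
    <my:calendar month="7"/>
  </body>
  </html>

The declaration in the root plus the stylesheet rule is the whole
trick: namespace asserted, semantics attached.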
Note: Linda Grimaldi published a piece of RDF on
this list last week that resolved to a piece of Java.
So clearly namespaces, rightly or wrongly, morally or
indefensibly, big-endian or little-endian, without
regard to the philosophical or legal or sanctioned
efforts of the standards committees, ARE BEING USED
TO ATTACH SEMANTICS TO XML TAGS.
Alaric, you mention a global registry. A local registry
suffices for working out when handlers implement
object tags, a sort of SGML-like subdoc approach.
A global registry is like a web service in a sense.
Isn't it possible, even if highly inefficient, to
hook up semantic engines as services?
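In code terms, a local registry is a trivial thing; something like
this Java sketch (all names invented) is all that is meant, and the
lookup is where one could bolt on a call out to a remote semantic
engine:

  import java.util.HashMap;
  import java.util.Map;

  interface NamespaceHandler {
      void handle(org.w3c.dom.Element element);
  }

  class LocalRegistry {
      private final Map<String, NamespaceHandler> handlers =
          new HashMap<String, NamespaceHandler>();

      void register(String namespaceUri, NamespaceHandler handler) {
          handlers.put(namespaceUri, handler);
      }

      NamespaceHandler lookup(String namespaceUri) {
          // A global registry or remote semantic engine could be consulted
          // here as a fallback when the local map has no entry.
          return handlers.get(namespaceUri);
      }
  }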
Folks, when Don Box mentioned Software ICs (an old
term from the Cox books), did anyone think to associate
Software ICs with registered names?
Forward progress on the web as a system, or even
as an operating system, begins with an abstract
object model for the so-called standard web browser.
This must be a standard browser, and I do mean
an international standard, not a wiki or simply
an open-source code party. Both of those are
desirable but not the means by which the system
is defined.
The DOM isn't good enough. XSLT is just a
transformation language. CSS is pretty good.
RDF... maybe. One needs a way to describe an
abstract object model of the browser that
is mappable to the XML namespaces and by
which one can easily declare meaningful
combinations.
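Purely as an illustration of what "declare meaningful combinations"
could look like, imagine a declaration along these lines (every
element and attribute name here is invented):

  <browser-model xmlns="http://example.org/abstract-browser-model">
    <binding id="xhtml"  namespace="http://www.w3.org/1999/xhtml"       implements="Rendering Events"/>
    <binding id="svg"    namespace="http://www.w3.org/2000/svg"         implements="Rendering"/>
    <binding id="mathml" namespace="http://www.w3.org/1998/Math/MathML" implements="Rendering"/>
    <combination members="xhtml svg mathml" via="Rendering"/>
  </browser-model>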
RSS won't be extensible in and of itself
without something similar. We really
must differentiate XML language design
from XML system design.
len
From: Alaric B Snell [mailto:alaric@alaric-snell.com]
Dare Obasanjo wrote:
> 1.) I can take a vanilla XSLT processor and pass it a stylesheet with
> EXSLT extension elements which my XSLT processor automatically learns
> how to process as valid stylesheet instructions.
>
> AND
>
> 2.) I can take a vanilla W3C XML Schema processor and pass it a schema
> with embedded Schematron assertions which it automatically learns how to
> use to validate an input document in addition to using the W3C XML
> Schema rules.
>
> since these are both "simple" cases of mixing XML vocabularies with
> agreed upon semantics.
>
> As far as I'm concerned this is an unfeasible problem to attempt to
> solve and claiming otherwise is as ludicrous as the claims many were
> making about AI in the 80s and about the Semantic Web in the 90s.
I wouldn't call those unfeasible... hard, maybe, but not impossible.
To solve it takes a few prerequisites:
1) Some way of getting code to run on anything. Perhaps fat binaries.
Perhaps a really minimal bytecode - a stack machine of some description,
maybe - that can be interpreted or compiled. Perhaps Java. Whatever.
With a sandboxing mechanism.
2) Standard interfaces for, for example, schema checking systems
independent of the schema language, so one can write interchangeable
modules for XML Schema and Schematron.
3) A global registry mapping namespace URIs to bits of code that
'implement' them.
4) Better definition of the semantics of extension. In XSLT, I imagine
that an XSLT processor might be implemented in terms of a recursive
algorithm that alternates between a pattern matching mode and a rule
executing mode. In rule execution, it might have a big lookup table of
"xsl:for-each" and friends to decide how to evaluate each part of a
rule. In pattern matching, it might have a big lookup table of
"xsl:template" and... nothing else. So one might generalise that lookup
table into "look up the namespace URI in the global registry, check that
the returned module does indeed implement the 'Transformation'
interface, and then feed it the element name invoked along with the
transformation context and input and details of what to do with the
output etc. etc." (a rough sketch of that dispatch follows this list).
5) Somebody to write those modules! Presumably this could fall to the
namespace authors - the schema for elements in the namespace and the
standard semantic declaration would go hand in hand.
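In code terms, the dispatch described in point 4 might look something
like this (the interface names and the registry are assumptions for
illustration, not an existing API):

  import java.util.Map;
  import org.w3c.dom.Element;
  import org.w3c.dom.Node;

  interface Transformation {
      // Evaluate one instruction element from this namespace in a context.
      void evaluate(Element instruction, TransformationContext context);
  }

  interface TransformationContext {
      Node currentInputNode();
      void emit(Node output);
  }

  class GenericTransformEngine {
      private final Map<String, Object> globalRegistry; // namespace URI -> semantics module

      GenericTransformEngine(Map<String, Object> globalRegistry) {
          this.globalRegistry = globalRegistry;
      }

      void evaluateInstruction(Element instruction, TransformationContext context) {
          Object module = globalRegistry.get(instruction.getNamespaceURI());
          if (module instanceof Transformation) {
              // The module declares that it implements the Transformation
              // interface, so feed it the element and the context.
              ((Transformation) module).evaluate(instruction, context);
          } else {
              throw new IllegalStateException(
                  "no Transformation semantics registered for "
                  + instruction.getNamespaceURI());
          }
      }
  }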
Note that this isn't *forcing* semantics; it's just *providing default*
semantics. You'd still be free to parse an XSLT stylesheet and use it
to, say, produce a nice diagram of the transformation it embodies, using
your own knowledge of XSLT. The semantic modules might well only define
the semantics of those elements and attributes and extension functions
and whatnot when used for transformations. And you would be free to hard
code in your transformation engine that you know a quicker way to
implement xsl:template using some special hardware or algorithm you have
lying around, and thus avoid using the interpreted bytecode of the
official semantics, but then it's your job to make sure your semantics
matches theirs in all the areas that matter.
A renderer might have a generic layout model for rendering, perhaps the
CSS box model, and it would dispatch based upon namespaces to semantics
modules for each namespace and, as long as they support the rendering
interface, ask them to render themselves. Thus XHTML, Docbook, MathML,
and so on could all coexist happily; Docbook might implement rendering
by just applying some XSLT to itself then chaining to the XHTML
renderer. Stuff like RDF embedded in HTML might not implement the
rendering interface, in which case it would have no effect on the
display - it'd just be ignored. Problem domains other than rendering
might take a harsher view of namespaces for which an implementation of a
relevant interface cannot be found. But maybe XHTML and friends might
declare, in their semantics in the global registry, that they can be
used for 'documentation', in which case document types without explicit
documentation elements might just allow elements from their namespaces
willy-nilly.
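The same pattern applied to rendering rather than transformation
(again, every name here is illustrative): elements whose namespace
module supports a rendering interface are asked to render themselves,
and anything else simply contributes nothing to the display:

  import java.util.Map;
  import org.w3c.dom.Element;
  import org.w3c.dom.Node;
  import org.w3c.dom.NodeList;

  interface Rendering {
      void render(Element element, Box parentBox);
  }

  class Box { /* a box in the generic layout model, e.g. the CSS box model */ }

  class GenericRenderer {
      private final Map<String, Object> registry; // namespace URI -> semantics module

      GenericRenderer(Map<String, Object> registry) {
          this.registry = registry;
      }

      void renderChildren(Element parent, Box box) {
          NodeList children = parent.getChildNodes();
          for (int i = 0; i < children.getLength(); i++) {
              Node child = children.item(i);
              if (!(child instanceof Element)) {
                  continue;
              }
              Object module = registry.get(child.getNamespaceURI());
              if (module instanceof Rendering) {
                  ((Rendering) module).render((Element) child, box); // XHTML, SVG, MathML...
              }
              // Namespaces with no Rendering module (RDF in HTML, say) are skipped.
          }
      }
  }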