Which is why the Microsoft solution of attaching event handlers via the style sheet and the namespace works so well. It completes the declaration of the control type with the event it receives and declares the handler (function) to fire when that event arrives. In other words, it recognizes that:
1. Link rendering is style rendering in terms of the objects/controls the container renders. Any rendered object that receives events is, by definition, a control. An association, on the other hand, such as “an author is a person,” is just an assertion. It need not be rendered; it just has to be stored.
2. Associating a semantic to an event received is a late-binding process.
Hypertext pioneers recognized long before HTML that links receiving events are post-rendering controls. This is the issue that has been bedeviling URI/URLs since they were first adopted. It doesn’t break anything but it confuses some and makes it necessary to either attach semantics by a late-bound processing approach or to fix them in a specification, and of course, ‘slightly different meanings’ foul up interoperability.
BTW: Linkbases don’t take off because it is easier to do the same thing in a database. Declaring one in XML is no challenge at all. We’ve been doing from/to links in VRML ROUTEs and now X3D ROUTEs from day one. Of course, those standards have an accompanying object model in the standard that ensures differences of meaning for these don’t occur. They aren’t renderable controls; they route real-time event types.
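For anyone who hasn't seen one, a ROUTE is exactly that kind of from/to link with a fixed object model behind it. A minimal sketch (node names and field values here are illustrative, not from any real scene): a TouchSensor's touchTime output is wired to a TimeSensor's startTime input, so clicking the shape starts the clock.

```xml
<!-- Illustrative X3D fragment: DEF names are made up for this sketch. -->
<Transform>
  <Shape><Box/></Shape>
  <TouchSensor DEF="Clicker"/>
</Transform>
<TimeSensor DEF="Clock" cycleInterval="2.0"/>

<!-- The link: from one node's output field to another's input field.
     The spec's object model fixes what routing an SFTime event means,
     so no 'slightly different meaning' can creep in. -->
<ROUTE fromNode="Clicker" fromField="touchTime"
       toNode="Clock" toField="startTime"/>
```

The point being: the markup is trivial; the event-typing and execution semantics come from the accompanying object model, not the link syntax.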
The semantic problem is an object model problem. In fact, the linking problems go away if one approaches them from an object model. That is old news everywhere.
len
That’s right. The process is the harder problem and can’t (shouldn’t?) be addressed by a linking spec alone. I’m not sure how I feel about whether XLink associations are over-specified or incompletely specified. Maybe the mix is about right, but from the ‘Camp 2’ point of view the links, in any form, end up a black box (which comes back to process, I guess). Eric’s comment re: XTM seemed about right to me when he wrote:

<quote>
Using XLink to simulate extended links with a bunch of simple links looks to me like saying: "OK; we'll take the syntax so that we can say we are compliant but we'll attach a slightly different meaning". If I am right, XTM can hardly pretend to be using XLink :) ... OTH, if I am wrong, the XLink recommendation should better have defined simple links only and explained how extended links can be built from simple links.
</quote>
But it’s hard to think of how to use xlink associations without attaching that “slightly different meaning”.
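To make the simple-vs-extended distinction concrete, here's a rough sketch of the two forms. Element names, labels, and the example.org arcrole URI are invented for illustration, not taken from XTM or any real vocabulary:

```xml
<!-- Simple link: inline, one local resource pointing at one remote one.
     The 'meaning' of traversal is whatever the consumer decides. -->
<author xmlns:xlink="http://www.w3.org/1999/xlink"
        xlink:type="simple"
        xlink:href="people.xml#mmouse">Mickey Mouse</author>

<!-- Extended link: the same association declared out of line,
     with locators and an arc, so neither resource has to contain it. -->
<association xmlns:xlink="http://www.w3.org/1999/xlink"
             xlink:type="extended">
  <work   xlink:type="locator" xlink:href="books.xml#b1"
          xlink:label="work"/>
  <person xlink:type="locator" xlink:href="people.xml#mmouse"
          xlink:label="person"/>
  <wrote  xlink:type="arc" xlink:from="person" xlink:to="work"
          xlink:arcrole="http://example.org/roles/author"/>
</association>
```

Both are syntactically valid XLink; what the arcrole URI is taken to mean is exactly the process question that the spec leaves open.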
Nick.
-----Original Message-----
Yes it does. Are you saying that the semantics associated with the XLink markup are:
o incompletely associated/specified (not enough data)?
o over-specified (too much data that you have to ignore)?
o not precise enough about the semantic/process associated (the problem is not the markup specification but the process specification)?
You are right that the whole point of indirect association is to specify a process. Typically when a markup language becomes controversial, it is not because of the markup (trivial to model that) but because of the specification for the object that consumes it. That is one reason for perma threads in XML: debating syntax and data declaration instead of object methods where the real problems of specification are harder and Not XML anyway.
len
GML (Geography Markup Language) also relies on XLink for semantic association and represents a growing community, riding a gradual uptake of OGC WFS services. With metadata standards rapidly maturing in this domain, the GML community is coming to a point where enterprise support for GML will require custom XLink models/processors. Previous experiences with XLink have left me thinking that the effort/reward ratio is far too low. I’m interested by the direction of this thread though.
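As a rough illustration of the kind of association at issue in GML (the feature type, property name, and code-list URI below are invented for this sketch, not from the GML schemas):

```xml
<!-- Hypothetical GML feature: the surfaceType property is given
     by reference via xlink:href rather than inline. -->
<gml:featureMember xmlns:gml="http://www.opengis.net/gml"
                   xmlns:xlink="http://www.w3.org/1999/xlink">
  <Road gml:id="r42">
    <gml:name>Main Street</gml:name>
    <!-- A consuming application must resolve this reference AND
         decide what the association means — the custom XLink
         model/processor problem described above. -->
    <surfaceType xlink:href="http://example.org/codes/surface#asphalt"/>
  </Road>
</gml:featureMember>
```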
Nick Ardlie