Gavin, good feedback, thanks.
> The point I was making is that for the XHTML summarizer (assuming this
> means something that is able to meaningfully summarize the content of
> itself) to do its job, it needs to correlate the summarization of
> its constituent/embedded parts... or there is a real risk of them
> being out of sync. Many kinds of processing need this.... there has to
> be a coordination framework there... a well-known set of interfaces
> almost.
Right. But I'm only interested in the serialized structure, not
anything to do with activation and binding with that structure (Bento
vs. Opendoc).
> The same thing applies to XML processing. In the context of a given
> processing application (such as layout), even switching on the
> namespace may not help. What happens if no component that implements
> that processing interface can be found?
That depends. If any subdocument acts as a container, and the element
of that subdocument which does the containing is required to be
processed (e.g. smil:skip-content="false"), then if a suitable
compound processor cannot be constructed, processing must fail.
As there's no mandatory extension mechanism for XML (like SMIL's
skip-content), a generic processor has to assume that each subdocument
must be processed. This isn't so bad, but for larger documents it
could mean a noticeable delay for the user.
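To make that failure mode concrete, here's a minimal sketch of a
generic processor with namespace dispatch. The handler table and
namespace URIs are illustrative assumptions, and the skip-content
attribute is borrowed from SMIL as above; no spec mandates this exact
behaviour:

```python
import xml.etree.ElementTree as ET

SMIL_NS = "http://www.w3.org/2001/SMIL20/"

# Hypothetical dispatch table: namespace URI -> processor.
# Only XHTML has a handler here; anything else is a subdocument
# for which "no suitable compound processor can be constructed".
processed = []
HANDLERS = {
    "http://www.w3.org/1999/xhtml": lambda el: processed.append(el.tag),
}

def process(el):
    ns = el.tag.partition("}")[0].lstrip("{")
    handler = HANDLERS.get(ns)
    if handler is None:
        # With no mandatory extension mechanism, a generic processor
        # must assume the subdocument is required -- so it fails,
        # unless the element explicitly says it may be skipped.
        if el.get("{%s}skip-content" % SMIL_NS, "false") == "false":
            raise RuntimeError("no processor for namespace %r" % ns)
        return  # explicitly skippable: ignore this subtree
    handler(el)
    for child in el:
        process(child)

doc = ET.fromstring(
    '<h:div xmlns:h="http://www.w3.org/1999/xhtml">'
    '<u:x xmlns:u="urn:unknown" '
    'xmlns:s="http://www.w3.org/2001/SMIL20/" s:skip-content="true"/>'
    '</h:div>')
process(doc)  # the unknown subtree is marked skippable, so this succeeds
```

Drop the skip-content attribute from the unknown element and the same
walk raises, which is exactly the "processing must fail" case.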
> These kind of questions are at
> the heart of processing, and lead to defining an application, because
> not all applications have the same set of behaviours, *especially* in
> the face of error.
>
> Defining a dispatch model is very good. A useful step, but it ignores
> the problem of *building the application* and *packaging the
> application*... and claims that "we'll just download the code" don't
> really work, because you *still* can miss components.... or you open
> yourself up to the well-known Trojan horse problem with embedded
> downloadable code.
It doesn't "ignore" those things. It says those are orthogonal
issues; the "what" versus the "how".
> On this, I think Rick is right on the money. One of the biggest
> problems with XML is *not* namespaces, the processing model, or
> <whatever>. It is *deployment*. How do we get the XML content, and all
> the bits necessary for processing it *in an application context* to
> the user? RDDL etc. need to be evaluated in that perspective, I
> believe... or at least in the context of an application.
>
> I would go one step beyond packaging, and claim that what we really
> need is a way to deploy *applications of XML* in a secure manner. For
> people developing {information} products, that is the only way they
> can be sure people consume the data as they intended them to. For
> example, I would like some way of saying, "Mr. Baker may read my
> documents but he cannot view the source"... or I might package my data
> saying that "this is available for online display, display via a
> braille reader, as a spoken document, and as a pdf file. The source is
> not available.".
You mean like MS Word? 8-)
I understand what you mean, but I'm not convinced of the value of it.
Anyhow, that's a separate topic.
> The thing here is that in most cases, I, as the producer, do not care
> about embedding *except* as it pertains to my application. In the
> scope of an application, with well-known dispatch semantics, and a
> well-known set of components (for example, XML EDI
> processors/gateways), you probably *can* dispatch on embedded content
> reliably using little more than an element lookup table... but you're
> still left with packaging.
For packaging, I'm assuming a compound document with containment purely
by value. No XInclude ickiness here (sorry, Dave 8-).
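To illustrate what containment purely by value buys you: the compound
document is one serialized tree, so locating embedded subdocuments is
just a walk that notices namespace changes -- nothing external to
fetch or resolve. A sketch, with a made-up compound document; "an
element whose namespace differs from its parent's starts a
subdocument" is my working assumption, not any spec's definition:

```python
import xml.etree.ElementTree as ET

def subdocument_roots(el, parent_ns=None):
    # An element starts an embedded subdocument when its namespace
    # differs from its parent's. Because containment is by value,
    # the whole tree is already in hand.
    ns = el.tag.partition("}")[0].lstrip("{")
    if ns != parent_ns:
        yield el
    for child in el:
        yield from subdocument_roots(child, ns)

compound = ET.fromstring(
    '<h:html xmlns:h="http://www.w3.org/1999/xhtml"><h:body>'
    '<s:svg xmlns:s="http://www.w3.org/2000/svg"><s:rect/></s:svg>'
    '<m:math xmlns:m="http://www.w3.org/1998/Math/MathML"/>'
    '</h:body></h:html>')
roots = list(subdocument_roots(compound))
```

Here `roots` holds the XHTML, SVG, and MathML roots: everything a
dispatch table needs, with no XInclude resolution step in sight.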
MB
--
Mark Baker, Chief Science Officer, Planetfred, Inc.
Ottawa, Ontario, CANADA. mbaker@planetfred.com
http://www.markbaker.ca http://www.planetfred.com