> You are making the assumption that in most cases the XSLT processor
> has control over the parsing (and/or serialization).
Not at all. I am making the assumption that in *some* cases the XSLT
processor has control over the parsing (and/or serialization), and
that this is common enough that it deserves some standardisation.
> I am saying that this is an inappropriate assumption and flies in
> the face of any pipeline-processing model, such as found in Cocoon
> or any application that uses an API such as JAXP.
Absolutely. That's the whole point of my trying to make the
distinction between the core role of an XSLT processor, which is
transforming from node tree to node tree, and the wider role of an
XSLT application, which is transforming from document to document.
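To make that distinction concrete, here is a minimal JAXP sketch (the
stylesheet and input document are invented for illustration). The
application does the parsing, and the transformation itself works purely
from node tree to node tree:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMResult;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamSource;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class TreeToTree {
    public static void main(String[] args) throws Exception {
        // The *application* parses the document; the XSLT processor
        // never sees the serialized form, only the resulting node tree.
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document source = dbf.newDocumentBuilder().parse(
                new InputSource(new StringReader("<doc><item>hi</item></doc>")));

        String xslt =
            "<xsl:stylesheet version='1.0'"
          + "    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
          + "  <xsl:template match='/doc'>"
          + "    <out><xsl:value-of select='item'/></out>"
          + "  </xsl:template>"
          + "</xsl:stylesheet>";
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));

        // Node tree in, node tree out: no parsing or serialization is
        // performed by the transformation itself, so xsl:output (and any
        // hypothetical xsl:input) has nothing to act on here.
        DOMResult result = new DOMResult();
        t.transform(new DOMSource(source), result);

        Document out = (Document) result.getNode();
        System.out.println(out.getDocumentElement().getNodeName());
    }
}
```

Swap DOMSource/DOMResult for StreamSource/StreamResult and the same
Transformer plays the wider "XSLT application" role, document to
document.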
> Adding more *optional* features will not help interoperability and
> will do nothing to *ensure* that a source tree is treated as nothing
> more than plain vanilla elements, attributes, and text.
Right -- it will only help interoperability as much as xsl:output
does. I'm assuming two conformance levels:
- basic conformance governing tree-to-tree transformations
- parsing/serializing conformance governing document-to-document
  transformations
xsl:input, xsl:output and disable-output-escaping would be part of the
second conformance level.
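As a hedged JAXP sketch of the second conformance level (stylesheet
invented for illustration): serialization settings, whether declared in
xsl:output or set by the application on the Transformer, only come into
play here because the result is a StreamResult, i.e. a document rather
than a tree:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class SerializingLevel {
    public static void main(String[] args) throws Exception {
        String xslt =
            "<xsl:stylesheet version='1.0'"
          + "    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
          + "  <xsl:output method='xml' omit-xml-declaration='yes'/>"
          + "  <xsl:template match='/'>"
          + "    <copy><xsl:value-of select='doc'/></copy>"
          + "  </xsl:template>"
          + "</xsl:stylesheet>";
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));

        // The application can also set (or override) serialization
        // properties itself; like xsl:output, this only matters when
        // the result tree is actually serialized.
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");

        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader("<doc>hi</doc>")),
                    new StreamResult(out));
        System.out.println(out);
    }
}
```

With a DOMResult instead, these properties would be irrelevant, which is
what makes the two conformance levels separable.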
> XSLT's data-model-centric view could only succeed in practice if
> there was never any doubt about the corresponding serialization. The
> XSLT/XPath 1.0 data model has an unambiguous mapping to its
> serialized form. In particular, the inherent constraints within the
> data model prevent a tree from ever being constructed that does not
> correspond to a real XML document (or EGPE). This is even more than
> can be said of the Infoset (because of its redundancy in modeling
> namespaces). The XPath 1.0 data model can always be round-tripped
> between abstract node tree and serialization; that is the point I'm
> trying to make. The same cannot be said of the PSVI (or of the
> current XPath 2.0 data model).
> XSLT 2.0 has pretty much committed to providing support for PSVI
> pipelines. Unless there is a mode that restricts input to vanilla
> XML (for lack of a better term), the mere presence of PSVIs will
> destroy the possibility of robust vanilla XML pipelines.
I agree absolutely that serializing an XPath 2.0 node tree is a very
different task from serializing an XPath 1.0 node tree. I'm still
unclear about what this has to do with whether or not authors should
have control over how a document is interpreted to construct a node
tree.
Is it that if you override the schema-locating information in a source
document, you can't guarantee that the node tree used as the source of
one stylesheet is the same as the node tree serialized as the result of
another? Is that what you're saying?
And is it that you think that's important because you want pipelines to
be able to work on serialized documents as well as on in-memory node
trees?
If so, I think I'm starting to understand, though there are still some
areas where I need clarification:
- Won't most/all pipelining be done on in-memory structures?
- What is it that currently stops a stylesheet author from ignoring
the augmentations to the data model if they choose to?
- What is it that currently stops a stylesheet author from generating
  such augmentations in the result?
The issue of how to serialize the XPath 2.0 node tree (via the PSVI,
presumably) is a nightmare, or rather would be if there were a way to
assert the type/typed-value of elements/attributes when they were
created. As far as I can see, this isn't something that XSLT 2.0 can do
in the current draft (four related issues are pending). Personally, I
think that it's something
to avoid like the plague for this round -- users should be able to add
XMLSchema-instance attributes to the result, but we shouldn't have
XSLT applications generating schemas on the basis of node trees.
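As a sketch of that last point (stylesheet and values invented for
illustration), an XSLT 1.0 stylesheet can already attach an xsi:type
attribute to the result as a plain literal attribute, with no schema
awareness on the processor's part:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsiTypeInResult {
    public static void main(String[] args) throws Exception {
        // xsi:type is written as an ordinary literal attribute; the
        // processor is not asserting (or checking) any type annotation.
        String xslt =
            "<xsl:stylesheet version='1.0'"
          + "    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'"
          + "    xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'"
          + "    xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
          + "  <xsl:output omit-xml-declaration='yes'/>"
          + "  <xsl:template match='price'>"
          + "    <price xsi:type='xs:decimal'><xsl:value-of select='.'/></price>"
          + "  </xsl:template>"
          + "</xsl:stylesheet>";

        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(
                "<doc><price>9.99</price></doc>")), new StreamResult(out));
        System.out.println(out);
    }
}
```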