RE: Enlightenment via avoiding the T-word
- From: "Fuchs, Matthew" <email@example.com>
- To: 'Rick Jelliffe' <firstname.lastname@example.org>,"'email@example.com'" <firstname.lastname@example.org>
- Date: Thu, 30 Aug 2001 11:19:41 -0700
Ah yes - the flat XSL architecture, inherited from the flat DSSSL
architecture which clearly didn't understand anything of the value of Scheme
(the programming language - not Schema) - which I pointed out (without
success, obviously) in my SGML'96 paper. Perhaps the lack of understanding
of the value of nesting is at the root of this whole business. A more
explicit explanation below.
> -----Original Message-----
> From: Rick Jelliffe [mailto:email@example.com]
> Sent: Wednesday, August 29, 2001 2:56 AM
> To: firstname.lastname@example.org
> Subject: Re: Enlightenment via avoiding the T-word
> ----- Original Message -----
> From: "Nicolas LEHUEN" <email@example.com>
> To: "'Rick Jelliffe'" <firstname.lastname@example.org>; <email@example.com>
> Sent: Wednesday, August 29, 2001 5:44 PM
> Subject: RE: Enlightenment via avoiding the T-word
> > Rick wrote :
> > >Why is it more efficient to make the receiver of your tables
> > >disambiguate the
> > >names (by using a PSVI or XPath) than doing it when the data
> > >is serialized?
> > >
> > >It is nice that every table is a separate namespace. But why
> > >is there any need
> > >to complicate XML with all this extra levels of processing to
> > >support that?
> > There is no extra level of processing. As I wrote before,
> in the first case
> > you have XPath expressions like /stuff/person/name/text(),
> and in the second
> > case you have /stuff/person/person-name/text().
> No, you have
> <xsl:template match="x:person-name">
> rather than
> <xsl:template match="/x:stuff/x:person/x:name">
> or, more likely, you have
> <xsl:template match="x:name">
> <xsl:if test="parent::x:person">
> <!-- oops I need this extra test because name is reused--
> those darned people at x:: namespace keep on adding
> new local elements and our code is written to just use
> the markup. Why cannot they just use vanilla XML.... -->
Why would anyone write such ugly, unextensible code when you can write
match="x:person/x:name"? With the "if" test, you need to go into the
body of existing code every time new elements are added and insert
pieces all over the place. With the two-level match, you just append
templates at the end. Multilevel matching handles locally scoped names
quite handily. The only problem is that if there's also a global element of
the same name, its one-level rule (i.e., x:name rather than
x:person/x:name) will catch the locally scoped names by default - if they're
in the schema namespace. If they're unqualified, and the rule that catches
the global element has a prefix, then it won't match the local names, once
again demonstrating why local names should be unqualified.
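To make the dispatch concrete, here is a toy Python sketch (not XSLT - the
element names and the little rule table are invented for illustration) of
a two-level (parent, element) match with a one-level fallback. Note that
supporting a new local element means appending one rule, never editing an
existing handler:

```python
# Toy model of template matching: prefer match="x:person/x:name"
# (a (parent, element) key), fall back to a bare match="x:name"
# ((None, element)). All names here are hypothetical.
rules = {}  # maps (parent, element) or (None, element) to a handler

def rule(parent, name):
    def register(fn):
        rules[(parent, name)] = fn
        return fn
    return register

def apply_templates(parent, name, text):
    # Two-level match wins; otherwise use the one-level rule.
    handler = rules.get((parent, name)) or rules.get((None, name))
    return handler(text)

@rule("person", "name")
def person_name(text):
    return f"person's name: {text}"

@rule("product", "name")
def product_name(text):
    return f"product's name: {text}"

@rule(None, "name")
def generic_name(text):
    return f"name: {text}"

print(apply_templates("person", "name", "Alice"))  # person's name: Alice
print(apply_templates("order", "name", "Widget"))  # name: Widget
```

Adding, say, a "company/name" rule later is a pure append; the generic
fallback still covers every context nobody has specialized.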
However, why couldn't XSL have taken advantage of basic principles of
programming language design? The way to handle locality is, again, by
scoping. Even if "name" is a global, if I wish to process it differently
when it shows up in "person" from when it shows up inside "product", plus
have a rule for default processing, one might write the code shown below (if
XSL allowed it):
<xsl:template match="x:person">
  ...information generated specific to "x:person" and not visible outside the
  template...
  <xsl:template match="x:name">
    ...name processing specific to person, which could recursively call the
    generic name template and can access local information created by the
    surrounding template...
  </xsl:template>
</xsl:template>
<xsl:template match="x:product">
  <xsl:template match="x:name">
    ...name processing specific to product...
  </xsl:template>
</xsl:template>
<xsl:template match="x:name">
  ...default name processing...
</xsl:template>
This kind of thing is even more interesting with locally scoped names which
should only appear in the context of a surrounding element. I've built this
kind of transformation engine more than once, and find it much easier to use
than XSL's all-global templates.
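For anyone who wants to see the scoping idea run, here is a minimal Python
sketch of such an engine (the class, its API, and the element names are all
invented for illustration - this is not any shipped engine): a surrounding
template pushes rules that are visible only while its subtree is processed,
and they can still reach the generic rule.

```python
# Hypothetical scoped transformation engine: templates installed by a
# parent are in scope only inside that parent's subtree.
import xml.etree.ElementTree as ET

class Engine:
    def __init__(self):
        self.scopes = [{}]              # global scope first, innermost last

    def rule(self, tag, handler):
        self.scopes[-1][tag] = handler

    def find(self, tag):
        for scope in reversed(self.scopes):
            if tag in scope:
                return scope[tag]
        raise KeyError(tag)

    def apply(self, elem, local=None):
        # Process elem with an extra scope of local rules in effect;
        # the scope vanishes once the subtree is done.
        self.scopes.append(local or {})
        try:
            return self.find(elem.tag)(self, elem)
        finally:
            self.scopes.pop()

def generic_name(engine, elem):
    return f"name: {elem.text}"

def person(engine, elem):
    # Name processing specific to person, visible only inside this
    # template; it can call the generic rule and close over local data.
    def person_name(en, el):
        return "person's " + generic_name(en, el)
    return [engine.apply(child, {"name": person_name}) for child in elem]

engine = Engine()
engine.rule("name", generic_name)
engine.rule("person", person)

doc = ET.fromstring("<stuff><person><name>Alice</name></person></stuff>")
print(engine.apply(doc.find("person")))        # ["person's name: Alice"]
print(engine.apply(ET.fromstring("<name>Bob</name>")))  # name: Bob
```

Outside a person, "name" falls back to the generic rule; inside one, the
locally installed rule shadows it - exactly the lexical scoping a language
designer would expect.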
> > When processing a document, you can hardcode the processing
> path followed by
> > your program : "I first process the person, then the
> product, then the
> > delivery info".
> Push programming and pull programming are both common methods
> with XML.
> Pull programming does as you say. You process an element and
> explicitly pull in specific child data as needed.
> But in push programming you code each element separately:
> elements are processed
> individually and pass information back to its parent or
> caller (or stream). You use your
> expectation that the document has been validated as a precondition
> for working on the document, which factors out the need for
> lots of tests. But if people re-use names for different types,
> push programs become fragile. A push program can be robust
> in the face of new names, but if you re-use existing names then
> you strongly couple the document to its specific schema and
> to the generating process. And loose coupling of distributed
> processing is one of the big reasons for XML.
As I explain above, a "push program" based on valid XSDL with local names is
not fragile in the way you describe, provided that the match statements for
global elements are qualified, and the match statements for local names are
unqualified and include the parent in the match path. It is only fragile
if the local names are put in the schema namespace.
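For readers unfamiliar with the distinction quoted above, a small Python
sketch of the two styles (element names and handlers are hypothetical):
pull hardcodes the path and reaches in for specific children; push gives
each element its own handler and lets children pass results back up.

```python
# Pull vs. push over the same (illustrative) document.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<stuff><person><name>Alice</name></person>"
    "<product><name>Widget</name></product></stuff>")

# Pull: the caller hardcodes the processing path.
def pull(root):
    return root.find("person/name").text

# Push: each element type is coded separately; results flow back to the
# parent, and prior validation stands in for per-node tests.
handlers = {
    "name":    lambda e: e.text,
    "person":  lambda e: {"person": [dispatch(c) for c in e]},
    "product": lambda e: {"product": [dispatch(c) for c in e]},
    "stuff":   lambda e: [dispatch(c) for c in e],
}

def dispatch(e):
    return handlers[e.tag](e)

print(pull(doc))      # Alice
print(dispatch(doc))  # [{'person': ['Alice']}, {'product': ['Widget']}]
```

The push version keeps working when new element types are appended to the
handler table; the pull version must be edited wherever paths are hardcoded.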