
   Re: [xml-dev] Using The Principle of Least Power As A Razor


>
> I'm not.  The TAG debated this and there are drafts
> for it, but no one seems to be able to clarify it
> past discussions of Turing completeness, reuse of
> data, etc.
Apropos the principle of least power, someone once argued to me that
XML Schema was preferable to Schematron because of this principle.
They were worried that Schematron could in fact be Turing complete and
thus susceptible to the halting problem (though they admitted they did
not know Schematron well enough to base their worries on the language
itself, just that they had heard it was more powerful).
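
For what it's worth, here is a minimal Schematron sketch (the element
and attribute names are invented) of the sort of thing behind that
"more powerful" reputation: an arbitrary XPath test expressing a
cross-element constraint that XML Schema cannot:

<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:pattern>
    <!-- cross-element constraint: the total must equal the sum of the lines -->
    <sch:rule context="order">
      <sch:assert test="sum(line/@amount) = @total">
        The order total must equal the sum of its line amounts.
      </sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>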

I sort of wonder whether the ISO version would not be Turing complete
if one were to allow the use of XPath 2.0 (basing that worry on the
let, extends, and include elements).
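
A rough sketch of the ISO features I mean, purely illustrative (the
element names are made up; queryBinding="xslt2" is the value commonly
used to select XPath 2.0):

<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron"
            queryBinding="xslt2">
  <sch:pattern>
    <sch:rule context="order">
      <!-- sch:let binds a value; the test uses an XPath 2.0 quantifier -->
      <sch:let name="amounts" value="line/@amount"/>
      <sch:assert test="every $a in $amounts satisfies number($a) ge 0">
        Line amounts must not be negative.
      </sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>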

From the Least Power document:

"Thus, there is a tradeoff in choosing between languages that can
solve a broad range of problems and languages in which programs and
data are easily analyzed. "

I sort of wonder whether a Lisp/Scheme user wouldn't argue this point.

Probably from the same point of view as this later remark: "For
example, a language with a straightforward syntax may be easier to
analyze than an otherwise equivalent one with more complex structure."

Whether something can be analysed may have more to do with its
structure than with how powerful it is.



>
> >here's my simple take on this interesting problem:
>
> >1. you need to distinguish between power and typing - does language a
> >require more or less typing? in general (flame me if you like) almost
> >all languages derive their "power" from a decrease in the amount of
> >typing to get the same result.
>
> That is somewhat close to what is said on the TAG list.  I ask, if a
> language implementation is silently casting, is that more or less
> powerful?

IF "Less powerful languages are usually easier to secure. "
and silent casting is more powerful than it follows that the document
agrees with the explicit strong typing side of the typing argument.
But I don't think it actually agrees with that.

I think it means that less powerful languages are usually easier to
secure, but not that power is a synonym for difficulty of securing.

That is to say, there are probably things other than power that can
affect how securable a language is (hence the "usually"). Perhaps one
of those things involves typing (I suspect this part is intentionally
left vague).


> Berners-Lee seems to be focused on reuse aspects.  Powerful
> languages that require a lot of say, object technology, just to
> express data are more powerful but not as good for the web user
> because the data can't be reused easily if at all.  I get that
> but is that all there is to it?

I don't know whether reuse is the word, rather than analysis. The
idea seems to be that if you have a table in HTML, then another
program, a web spider for example, can analyse what that table is
about. This principle spread to other languages: he mentions XSLT, for
example, where the use of templates means it can be easy to create
analytical tools for the data represented by a stylesheet, provided
the stylesheet is written in a way that makes this interpretation easy.

Let us suppose we had a crawler looking for transforms; there are
certain things we could analyse reasonably well:

1. What namespaces the XSLT uses.
2. What the output of the transform is, based on xsl:output, or on a
template that matches / and has output, or that otherwise leads to a
template that has output. I think quite a number of stylesheets are
explicit in this manner. Thus we can analyse that <xsl:template
match="/"><html><xsl:apply-templates/></html></xsl:template> outputs
HTML.
3. Whether the transform matches element x in the y namespace
(true/false).
4. Whether the transform matches element x in the y namespace and
asserts that it should be presented as a particular HTML element
(given that most transforms will be to HTML).

Checks 3 and 4 would of course put more weight on transforms written
in a way that our crawler can understand, probably simple transforms
like
<xsl:template match="x:para"><p><xsl:apply-templates/></p></xsl:template>
because anything else would be difficult to analyse.
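
Since XSLT is itself XML, the crawler's checks could even be written
as a transform over the transform. A rough sketch of check 1 and the
literal-pattern case of check 3, purely illustrative and nothing more:

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <!-- run against another stylesheet: report its in-scope namespaces
       and the literal patterns its templates match -->
  <xsl:template match="/xsl:stylesheet">
    <xsl:for-each select="namespace::*[name() != 'xml']">
      <xsl:text>uses namespace: </xsl:text>
      <xsl:value-of select="."/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
    <xsl:for-each select="xsl:template[@match]">
      <xsl:text>matches: </xsl:text>
      <xsl:value-of select="@match"/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>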

But of course this comes back to the statement that less powerful
languages are easier to analyse, and I'm not sure there is a direct
correlation between these things. In the example above, it is not the
less powerful use of XSLT that is easier to analyse; it is the more
structured/uniform use of XSLT.

On the other hand I could, using the document() function, construct
an XSLT that would be totally unanalysable. And I submit that by using
the document() function we would also be increasing the 'power' of the
transform, and decreasing its security at the same time.
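
Something along these lines, say (the element, attribute, and URI are
made up): the copied content is fetched at run time from a URI
computed out of the input, so nothing about the result can be
determined by looking at the stylesheet alone:

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:x="http://example.org/ns">
  <!-- the content copied into the output comes from a document()
       call whose URI is built at run time from the input -->
  <xsl:template match="x:para">
    <p>
      <xsl:copy-of
        select="document(concat('http://example.org/fragments/', @ref))"/>
    </p>
  </xsl:template>
</xsl:stylesheet>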

Cheers,
Bryan Rasmussen




 
