OASIS Mailing List Archives

RE: [xml-dev] Adam Bosworth on XML and W3C



So cognitive loading or psychological complexity is the measure of choice, and since it depends on a point of view, it is suspect
unless that view is generalizable as well.
 
I like the n-dimensional approach.  Nice fat strange attractors....
 
len
-----Original Message-----
From: Fuchs, Matthew [mailto:matthew.fuchs@commerceone.com]
Sent: Tuesday, October 09, 2001 12:53 PM
To: xml-dev@lists.xml.org
Subject: RE: [xml-dev] Adam Bosworth on XML and W3C

I can see two ways to look at this.

If you consider each specification as describing a language - the language of all correct schemas in the first case, and the language of all correct HTML4 pages in the latter - then one could apply algorithmic information theory (developed by Chaitin and Kolmogorov in the 1960s) and say that the complexity of a specification is the length of the minimum program required to verify that a statement (a schema in the first case, an HTML4 page in the latter) is in the language the specification describes.  From that perspective, HTML4 probably is simpler than Schema, given all the weird edge cases in Schema.
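To make that measure concrete, here is a toy sketch (nothing to do with Schema or HTML4 specifically - the two languages below are invented for illustration): it treats the complexity of a "specification" as the size of a program that decides membership in the language it describes. True Kolmogorov complexity is uncomputable, so the source length of any particular verifier only gives an upper bound.

```python
import inspect

def in_simple_language(s: str) -> bool:
    """Language A: strings consisting only of 'a' characters."""
    return all(c == "a" for c in s)

def in_balanced_language(s: str) -> bool:
    """Language B: balanced parentheses - needs more machinery to decide."""
    depth = 0
    for c in s:
        if c == "(":
            depth += 1
        elif c == ")":
            depth -= 1
            if depth < 0:
                return False
        else:
            return False
    return depth == 0

# Crude proxy for the Chaitin/Kolmogorov measure: the source length of
# each verifier is an upper bound on the minimal program size needed to
# decide its language.  The "richer" language needs the longer verifier.
for verifier in (in_simple_language, in_balanced_language):
    print(verifier.__name__, len(inspect.getsource(verifier)))
```

The analogy to the discussion: a Schema validator (weird edge cases and all) plays the role of the balanced-parentheses verifier, and an HTML4 well-formedness checker the simpler one - though which side actually wins is exactly the question under debate.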

But that wouldn't really be fair - validating an HTML4 page isn't a very useful task.  On the Schema side, a better measure might be the minimum size of a program that, given a schema and an instance, generates the PSVI.  On the HTML4 side, a better measure would be the minimum size of a program that, given an HTML4 page, displays it in a browser.  This probably skews the results back toward making Schema simpler.  On the other hand, HTML4 rendering depends so heavily on the UI it is implemented on that judging the complexity of an implementation becomes very difficult.

But then one might argue that this is not the relevant measure of complexity - we're not looking for the minimum size across all programs, but for a very different measure: the number of synaptic changes in a human brain required to achieve competence in one or the other.