
RE: [xml-dev] Adam Bosworth on XML and W3C



I think both are relevant - schema is consumed by both carbon and silicon
agents.  Two markup-related examples: ambiguous content models and Murata
Makoto's work with hedge automata.  There seems to be a human (carbon-based
agent) predilection for ambiguous content models because they can be easier
to write, but they can cause problems for the behavior of programs
(silicon-based agents).  Likewise, Makoto's work is very elegant, and
implementation may not be so hard (low Kolmogorov complexity), but there are
cases requiring exponential processing time, which is why I was against
using them directly in Schema (when I heard "exponential" I thought
"denial-of-service attack").
 
It would be awesome if there were some way to relate the formal complexity
measures to psychological complexity.  Do you know of any sources?
 
Matthew

-----Original Message-----
From: Bullard, Claude L (Len) [mailto:clbullar@ingr.com]
Sent: Tuesday, October 09, 2001 11:00 AM
To: Fuchs, Matthew; xml-dev@lists.xml.org
Subject: RE: [xml-dev] Adam Bosworth on XML and W3C


So cognitive loading or psychological complexity is the measure of choice,
and since it depends on a view it is suspect unless the view is
generalizable as well.
 
I like the n-dimensional approach.  Nice fat strange attractors....
 
len

-----Original Message-----
From: Fuchs, Matthew [mailto:matthew.fuchs@commerceone.com]
Sent: Tuesday, October 09, 2001 12:53 PM
To: xml-dev@lists.xml.org
Subject: RE: [xml-dev] Adam Bosworth on XML and W3C



I can see two ways to look at this. 

If you consider each specification as describing a language - the language
of all correct schemas in the first case, and the language of all correct
HTML4 pages in the latter - then one could apply algorithmic information
theory (developed by Chaitin and Kolmogorov in the '60s) and state that the
complexity of a specification is the length of the minimum program required
to verify that a statement (a schema in the first case, an HTML4 page in the
latter) is in the language described by the specification.  From that
perspective, HTML4 probably is simpler than Schema, given all the weird edge
cases in Schema.
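
In rough notation of my own - not anything from Chaitin or Kolmogorov
directly - the measure being described is something like

    K_{\mathrm{spec}}(L) \;=\; \min\{\, |p| \;:\; \forall x,\ p(x)\ \text{accepts} \iff x \in L \,\}

i.e. the length |p| of the shortest program p that decides membership in the
language L the specification defines.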

But that wouldn't really be fair - validating an HTML4 page isn't a very
useful task.  On the schema side, a better measure might be the minimum size
of a program that, given a Schema and an instance, generates the PSVI.  On
the HTML4 side, a better measure would be the minimum size of a program
that, given an HTML4 page, displays the page in a browser.  This probably
skews the results back towards making Schema simpler.  On the other hand,
HTML4 depends so much on the UI it is implemented on that judging the
complexity of an implementation becomes very difficult.
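
As a hedged sketch of the schema-side measure (the file names are invented,
and lxml reports validity and an error log rather than a full PSVI, so this
only approximates the idea), a program that consumes a Schema plus an
instance looks something like:

    from lxml import etree

    # Hypothetical file names; validity + error log stands in for the PSVI.
    schema = etree.XMLSchema(etree.parse("purchase-order.xsd"))
    instance = etree.parse("order-instance.xml")

    if schema.validate(instance):
        print("valid")
    else:
        for error in schema.error_log:
            print(error.line, error.message)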

But then one might argue that this is not the relevant measure of complexity
- we're not looking for the minimum size across all programs, but for
something very different: the number of synaptic changes in a human brain
required to achieve competence in one or the other.