RE: [xml-dev] Symbol Grounding and Running Code: Is XML Really Extensible


Unless the USPTO understands why semantics are a big deal, and why simple
symbol processing is insufficient to warrant interoperable systems, I
sincerely hope it stops granting business-method patents, or software
patents in general, because it is warranting their claims without
understanding them.

Super registries are not the answer.  Federated registries that use
namespaces to denote standard and royalty-free technology can be part of a
solution.  Not perfect, but better.

len

From: Bruce.Cox@USPTO.GOV [mailto:Bruce.Cox@USPTO.GOV]

Mr. Snell proposes a solution to some tough problems which, if solved, Mr.
Obasanjo thinks would provide sufficient grounds for successfully mixing
arbitrary namespaces (that is, would provide the requisite semantics), and
Mr. Bullard points out that namespaces are being used to attach semantics
to XML.
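
As a concrete case of such mixing, consider an RSS item carrying a Dublin
Core element.  A small Python sketch, with the element text made up purely
for illustration:

import xml.etree.ElementTree as ET

item_xml = """
<item xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Symbol Grounding and Running Code</title>
  <dc:creator>Example Author</dc:creator>
</item>
"""

item = ET.fromstring(item_xml)
for child in item:
    # Each element arrives as a qualified name in Clark notation, e.g.
    # {http://purl.org/dc/elements/1.1/}creator.  Whatever "semantics"
    # dc:creator has come from what the consuming code chooses to do
    # with that symbol.
    print(child.tag, "->", (child.text or "").strip())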

Having read most of this thread now, I'm convinced that terms like
"semantics" are being driven to meanings they cannot readily support.  Just
as "artificial intelligence" is an oxymoron that raises expectations beyond
any possibility of fulfilling them, so do "web ontology language" and
"semantic web".  These philosophical terms, appropriated for use outside
their rightful context, are confusing what should be a relatively simple
issue, it seems to me.

Computers process symbols.  Input is rearranged into output that is
convenient for us or for other machines to process further.  Modern society
is replete with the value that this brings.

Mr. Snell's outline of a solution looks to me like another layer (or more)
of machinery that can accommodate arbitrary namespaces, provided there is
some super registry and other machinery to resolve ... .  Well, the point
is, the symbol processing machine gets bigger and more complex.  Attaching
various machine behaviors to various objects, recognized by whatever means
as belonging to the appropriate class for that behavior, is nothing more
than what computers have always done, that is, process symbols.  A PhD in
semantics is not required.  If there really were meanings, which usually
require interpretation, the processing would not be mechanical, the results
would not be worth paying for, and we'd have long since trashed such
machines as unreliable junk.
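
That kind of behavior-attachment is easy to picture as a dispatch table
keyed on qualified names.  A rough Python sketch, with hypothetical handlers
and the Dublin Core namespace used purely for illustration:

DC = "{http://purl.org/dc/elements/1.1/}"

def handle_title(elem):
    return ("title", elem.text)

def handle_creator(elem):
    return ("author", elem.text)

HANDLERS = {
    "title": handle_title,
    DC + "creator": handle_creator,
}

def process(item):
    results = []
    for child in item:
        handler = HANDLERS.get(child.tag)    # recognize the symbol...
        if handler:
            results.append(handler(child))   # ...and run the attached behavior
    return results

Run against the mixed-namespace item sketched above, process() would return
the title and author pairs; nothing in it understands anything, it only
matches symbols and executes whatever was registered against them.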

The expansion of the web machinery to solve problems such as the one
occasioned by RSS appears to lead to a (possibly much) more complex machine
than was anticipated when namespaces were introduced.  Do we need to "ground
the symbols"?  As others have pointed out, no.  Besides, that happens only
when a person looks at the symbols and understands them.  Where the machine
is too large and complex for any one of us to understand, it takes a
community, or an institution, to understand it.  Machines, no matter how
complex, cannot do this, nor do they need to.  That's our job, thank you.

I think the bigger question is, how do we pay for it?  Building a web
machine that can perform this kind of processing requires standards ever
more cosmic in scope.  Do we have the necessary experience and the vision to
see that large a picture?  Can we and our current institutions support it or
not?  Will the market embrace it, distort it, or ignore it?  Is the benefit
worth the effort?  Or can we afford to live on the edge of wilderness for a
while longer, taming it in smaller bytes?

--with my apologies if I've misrepresented anyone's comments.




 
