   RE: [xml-dev] A standard approach to glueing together reusable XML fra


You have put your finger on the problem of the semantic web:  cost. 
No one has ever, AFAIK, described it in terms of cost savings.  There 
is likely a case for that, but it would be future savings.  We had 
the same problems in CALS.  Even the web was a hard sell initially, 
but it came with 'a movement', and true believers don't study 
cost estimates or project savings.  They just *go for it*.

My intent was to say there are markup opportunities in the patent 
content even if they are limited in the patent document structures. 
Also, given the enormous and growing problems of intellectual property 
in our industry, one would think the authorities would be actively 
exploring technologies to address them.  To hear that they 
are going in precisely the opposite direction is disturbing and 
a potential political problem both nationally and internationally. 
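
By way of illustration of those markup opportunities, here is a 
minimal sketch of what claim-level markup might look like (the 
element names and the classification attribute are invented for 
the sake of the example, not any actual patent DTD):

  <claim id="c1" class="707/3">
    <preamble>A method for indexing patent documents,</preamble>
    <limitation>parsing each claim into its essential terms;</limitation>
    <limitation>mapping each term to a classification code.</limitation>
    <reference target="c2"/>
  </claim>

Even that little structure would give a search system something 
better than raw text to work against.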

The problem nationally, of course, is to create technology that 
correctly and provably implements policy.  The problem internationally 
is to harmonize policies (think Berne Convention) so that they can be 
implemented in interoperable systems.  Classification systems 
would be a sine qua non of such systems and a very important 
application of the semantic web.  Wouldn't an automated 
classification and indexing system that reached down to 
the essential claims speed up the examination process as well 
as result in an auditable trail of claims references?  It 
would seem to make the prior art evaluation much simpler 
and more reliable as well as reduce the costs and probability 
of patent litigation.  Patent searches would certainly be 
more reliable.  It could also enable some exotic means of 
creating specifications.
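
As a sketch of that auditable trail, assuming a simple RDF vocabulary 
(the pat: namespace, its property names, and the patent and claim 
identifiers below are invented for illustration):

  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:pat="http://example.org/patent#">
    <!-- each essential claim is addressable, classified, and linked
         to the prior art that anticipates it -->
    <rdf:Description rdf:about="http://example.org/pat/1234567#claim1">
      <pat:classifiedAs rdf:resource="http://example.org/uspc/707-3"/>
      <pat:anticipatedBy rdf:resource="http://example.org/pat/7654321#claim4"/>
    </rdf:Description>
  </rdf:RDF>

Every classification and every prior art citation becomes a statement 
that can be queried and audited, which is exactly the trail an 
examiner would need.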

Who pays?  The usual victim: taxpayers.  The question is, 
for what quality of results?

len


From: Bruce.Cox@USPTO.GOV

I'm not entirely sure what you mean by "mapping" or "mappable entities," but
I can say that examiners, as I understand it, among other things, search for
patents that anticipate the claims of the application in hand.  If they find
none (and this is my take on what they do), then the application in hand is
presumed original.  Finding prior patents that are relevant is done by text
searching and by use of the US Patent Classification (see www.uspto.gov for
details).  Examiners themselves determine the classifications of a patent at
the time it is ready for publishing.  Applying any kind of markup to the
content of the specification or claim would require someone who
understands the technology well enough to apply the appropriate categories,
so this is not too different from applying patent classifications, except
that it would require considerably more time and validation (accuracy
counts).  As far as I can tell, nobody, and I do mean NOBODY, at the USPTO
would be willing to pay for that, no matter how valuable it might turn out
to be, so again, there is no scaling issue.  In fact, there has been
considerable effort to reduce the cost of or even eliminate the US Patent
Classification, as alarming as that might be.

Semantic technologies in general suffer from this defect: they are terribly
expensive to implement on any useful scale since they require that someone
(a live human with intelligence, knowledge, and experience) apply the markup
that makes the web "semantic" (I'm beginning to hate that word).  Who pays?




 
