The European Patent Office, the Japan Patent Office, and the World
Intellectual Property Organization are working with the USPTO to explore
ways of reducing redundant effort -- for example, examining once and
assigning classifications once, rather than having every office repeat the
work when each receives the same application (as they do, tens of thousands
of times a year). See http://pcteasy.wipo.int/efiling_standards/EFPage.htm
for the mutually agreed markup for patent application structures, one of the
first major steps in this direction. The three major offices will all be
using this markup for electronic applications and publications within the
next few years (the USPTO by January 2004).
It appears now that a thoroughly revised International Patent Classification
(IPC) will incorporate many of the distinctive features of the USPC
(frequent updates, scope notes, subclass definitions). You can see that
this could create further opportunity for reducing redundant work among the
industrial property offices without abandoning the value of a classification
scheme for patents.
Patent applications are sorted on arrival for workload distribution.
Clerks, using text searching and their knowledge of the US Patent
Classification, assign a classification that will get the application to the
right art unit for further processing. Attempts were made to automate this,
but automatic classification that improved on what the humans did (~70%
success rate, as I recall) was too expensive, I'm told. For this purpose,
70% is good enough. For the final classifications provided by examiners
after they have studied the application, it probably isn't possible to
achieve good-enough results through automation, or if it is, it would
probably again cost too much. I'm not aware of any current projects in this
direction among any of the industrial property offices.
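To make the idea concrete, here is a minimal sketch of the kind of text-based
pre-classification that could be attempted -- purely illustrative, assuming a
corpus of previously routed applications; it is not the USPTO's actual system,
and the abstracts and art-unit numbers are invented:

    # Hypothetical sketch: route incoming applications to art units by text.
    # The training data and art-unit labels below are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Abstracts of previously classified applications and the art units they went to.
    training_texts = [
        "A method of encoding video frames using motion compensation ...",
        "A pharmaceutical composition comprising a monoclonal antibody ...",
        "An internal combustion engine with variable valve timing ...",
    ]
    training_units = ["2480", "1640", "3740"]  # invented art-unit numbers

    classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
    classifier.fit(training_texts, training_units)

    # A new application's abstract is assigned the most likely art unit.
    new_abstract = "A codec that predicts frames from previously decoded frames ..."
    print(classifier.predict([new_abstract])[0])

The mechanics are simple; matching or beating the roughly 70% human success
rate mentioned above, at an acceptable cost, is the hard part.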
Claim markup is designed to support claim steps (hierarchy), and there are
tags for claim dependency. There are some interesting ideas about how to
exploit this. It could help in managing examiners' performance plans and
setting fee structures, but no one has suggested how it could be exploited
to improve searching. It is, after all, merely logical structure markup,
revealing nothing about the technology claimed.
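For instance (a hypothetical sketch only -- the element and attribute names
below are invented, not those of the agreed markup), claim-dependency tags
could be read mechanically to separate independent from dependent claims,
which is the sort of thing fee calculation or performance metrics would need:

    # Hypothetical sketch: walk claim-dependency markup to separate independent
    # from dependent claims. Element and attribute names are invented.
    import xml.etree.ElementTree as ET

    claims_xml = """
    <claims>
      <claim id="1">A widget comprising a frame and a fastener.</claim>
      <claim id="2" depends-on="1">The widget of claim 1, wherein the fastener is a rivet.</claim>
      <claim id="3" depends-on="2">The widget of claim 2, wherein the rivet is titanium.</claim>
    </claims>
    """

    root = ET.fromstring(claims_xml)
    dependencies = {c.get("id"): c.get("depends-on") for c in root.findall("claim")}

    independent = [cid for cid, parent in dependencies.items() if parent is None]
    dependent = [cid for cid, parent in dependencies.items() if parent is not None]

    print("independent claims:", independent)  # ['1']
    print("dependent claims:", dependent)      # ['2', '3']

Counting claims and dependency depth this way is straightforward precisely
because the markup is purely structural.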
But don't lose hope. Just having XML markup is a major improvement over
the previous markup, which was specific to a piece of 1970s photocomposition
hardware used by the printer. Now, at least, logical structure and rendering
are separated. As the PTO and patent information value-added resellers
learn how to exploit the XML, demand for more advanced markup will likely
increase. There is a great deal of inertia to overcome not just in the
industrial property offices, but throughout the industrial property
community, from filers right through consumers of the final information
byproducts. Management here fully supports XML on the basis of its
potential benefits, but there is little to show for what markup we have
introduced so far. When management has some real experience with actual
tangible benefits, then I expect things will accelerate. Could take five
years or more to get there, in my opinion.
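To illustrate what that separation buys (again a sketch with invented element
names, not the actual application DTD): the same structure-only markup can be
rendered any number of ways by code that lives entirely outside the document:

    # Hypothetical sketch: rendering derived from structure-only markup.
    # Element names are invented; the real application DTD is not reproduced here.
    import xml.etree.ElementTree as ET

    application_xml = """
    <application>
      <title>Widget With Improved Fastener</title>
      <abstract>A widget whose fastener resists vibration.</abstract>
    </application>
    """

    doc = ET.fromstring(application_xml)

    # One possible rendering: simple HTML. A print or search pipeline could
    # consume the same source without any change to the markup itself.
    html = "<h1>{}</h1>\n<p>{}</p>".format(
        doc.findtext("title"), doc.findtext("abstract"))
    print(html)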
One final comment before letting this thread get back on point. The Patent
Law Treaty appears to go a long way toward harmonizing policy among patent
offices, and, although not yet implemented, seems to be already driving some
technical issues here.
Bruce B. Cox
SA4XMLT
USPTO/OCIO/AETS
703-306-2606
-----Original Message-----
From: clbullar@ingr.com [mailto:clbullar@ingr.com]
Sent: Thursday, August 21, 2003 10:51 AM
To: Cox, Bruce; xml-dev@lists.xml.org
Subject: RE: [xml-dev] A standard approach to glueing together reusable XML fragments in prose?
You have put your finger on the problem of the semantic web: cost.
No one has ever, AFAIK, described it in terms of cost savings. There is
likely a case for that, but it would be future savings. We had the same
problems in CALS. Even the web was a hard sell initially, but it came with
'a movement' and true believers don't study cost estimates and project
savings. They just *go for it*.
My intent was to say there are markup opportunities in the patent content
even if they are limited in the patent document structures.
Also, given the enormous and growing problems of intellectual property in
our industry, one would think authorities would be actively exploring
technologies to address those problems. To hear that they are going in
precisely the opposite direction is disturbing and a potential political
problem both nationally and internationally.
The problem, of course, nationally, is to create technology that correctly
and provably implements policy. The problem internationally is to harmonize
policies (think Berne convention) that can be
implemented in interoperable systems. Classification systems
would be a sine qua non of such systems and a very important application of
the semantic web. Wouldn't an automated classification and indexing system
that reached down to the essential claims speed up the examiner process as
well as result in an auditable trail of claims references? It would seem to
make the prior art evaluation much simpler and more reliable as well as
reduce the costs and probability of patent litigation. Patent searches
would certainly be more reliable. It could also enable some exotic means of
creating specifications.
Who pays? The usual victim: taxpayers. The question is, for what quality
of results?
len
From: Bruce.Cox@USPTO.GOV [mailto:Bruce.Cox@USPTO.GOV]
I'm not entirely sure what you mean by "mapping" or "mappable entities" but
I can say that examiners, as I understand it, among other things, search for
patents that anticipate the claims of the application in hand. If they find
none (and this is my take on what they do), then the application in hand is
presumed original. Finding prior patents that are relevant is done by text
searching and by use of the US Patent Classification (see www.uspto.gov for
details). Examiners themselves determine the classifications of a patent at
the time it is ready for publishing. Applying any kind of markup to the
content of the specification or claim would require someone who would
understand the technology well enough to apply the appropriate categories,
so this is not too different from applying patent classifications, except
that it would require considerably more time and validation (accuracy
counts). As far as I can tell, nobody, and I do mean NOBODY, at the USPTO
would be willing to pay for that, no matter how valuable it might turn out
to be, so again, there is no scaling issue. In fact, there has been
considerable effort to reduce the cost of or even eliminate the US Patent
Classification, as alarming as that might be.
Semantic technologies in general suffer from this defect: they are terribly
expensive to implement on any useful scale since they require that someone
(a live human with intelligence, knowledge, and experience) apply the markup
that makes the web "semantic" (I'm beginning to hate that word). Who pays?