RE: [xml-dev] Validating Bulk XML Data
- From: noah_mendelsohn@us.ibm.com
- To: "Michael Kay" <mike@saxonica.com>
- Date: Wed, 15 Aug 2007 10:09:31 -0400
Michael Kay writes:
> I can't see any intrinsic reason why validating 1000 small documents
> should take longer than validating one document formed by concatenating
> the content, provided the schema itself is only prepared once.
Yes, in principle. If the particular validation tool you use happens to
have a long startup or setup time, then validating many small documents
would of course be slower. I noticed in your earlier email you suggested
a good approach in which Java APIs would be used to prepare a schema once
and then apply it repeatedly. That would presumably be fast. If one
instead made repeated calls to a command-line parser interface, perhaps
from a script or batch file, and if that interface did not support
precompilation or preparation of the schema, then 1000 small documents
would probably be slow. I suspect that's what you meant, but in this
case it's not clear what tools are being used.
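The prepare-once, validate-many approach described above can be sketched with the standard JAXP validation API (`javax.xml.validation`). This is an illustration, not code from the thread; the `BulkValidation` class name and the tiny inline schema are invented for the example. The key point is that `SchemaFactory.newSchema` (the expensive compilation step) runs once, while the resulting `Schema`/`Validator` is reused for each small document:

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;
import org.xml.sax.SAXException;

public class BulkValidation {
    // Tiny inline schema for illustration: a single integer element.
    static final String XSD =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
      + "<xs:element name='item' type='xs:int'/>"
      + "</xs:schema>";

    private final Validator validator;

    public BulkValidation(String xsd) throws SAXException {
        // Compile the schema once; this is the potentially slow setup step.
        SchemaFactory factory =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema =
            factory.newSchema(new StreamSource(new StringReader(xsd)));
        // A Validator can be reused sequentially (it is not thread-safe).
        validator = schema.newValidator();
    }

    // Validate each document against the precompiled schema; returns the
    // number validated, or throws SAXException on the first failure.
    public int validateAll(String[] docs) throws Exception {
        int count = 0;
        for (String doc : docs) {
            validator.validate(new StreamSource(new StringReader(doc)));
            count++;
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        BulkValidation v = new BulkValidation(XSD);
        String[] docs = { "<item>1</item>", "<item>2</item>", "<item>3</item>" };
        System.out.println("validated " + v.validateAll(docs) + " documents");
    }
}
```

By contrast, invoking a command-line validator once per file would repeat the schema compilation (plus JVM or process startup) 1000 times, which is presumably the slow case discussed above.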
Noah
--------------------------------------
Noah Mendelsohn
IBM Corporation
One Rogers Street
Cambridge, MA 02142
1-617-693-4036
--------------------------------------