Re: [xml-dev] hashing


Eric Hanson wrote:

> I have a large collection of XML documents, and want to find and
> group any duplicates.  The obvious but slow way of doing this is
> to just compare them all to each other.  Is there a better
> approach?

You figured it out in the next paragraph.

> Particularly, are there any APIs or standards for "hashing" a
> document so that duplicates could be identified in a similar way
> to what you'd do with a hash table?

The quick-and-dirty approach would be to run all of them through an XML 
normalizer (such as any SAX parser hooked up to XML-Writer).  You could add 
some SAX filters between the parser and the writer to tweak the 
normalization for your business rules: ignoring certain attributes, 
supplying default values, folding case, rounding numerical precision, and 
so on.  If you don't want to play with SAX, you can do something similar 
with an XSLT transformation.
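
To make that concrete, here's a rough sketch of such a normalizer in Java, 
using nothing but the standard SAX API; the class name, the 
attribute-sorting rule, and the command-line usage are all mine, one way 
to do it rather than any standard:

import java.io.File;
import java.io.FileWriter;
import java.io.PrintWriter;
import java.io.Writer;
import java.util.Map;
import java.util.TreeMap;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class NormalizeXml extends DefaultHandler {
    private final PrintWriter out;

    NormalizeXml(Writer w) { out = new PrintWriter(w); }

    // Minimal escaping so the normalized copy is still well-formed XML.
    private static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;")
                .replace("\"", "&quot;");
    }

    @Override
    public void startElement(String uri, String local, String qName,
                             Attributes atts) {
        out.print("<" + qName);
        // Sort attributes by name so attribute order never affects the
        // output; this is also the place a filter could drop attributes
        // or apply other business rules before serializing.
        Map<String, String> sorted = new TreeMap<String, String>();
        for (int i = 0; i < atts.getLength(); i++)
            sorted.put(atts.getQName(i), atts.getValue(i));
        for (Map.Entry<String, String> a : sorted.entrySet())
            out.print(" " + a.getKey() + "=\"" + escape(a.getValue()) + "\"");
        out.print(">");
    }

    @Override
    public void endElement(String uri, String local, String qName) {
        out.print("</" + qName + ">");
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        // A fuller normalizer would also collapse insignificant whitespace
        // and canonicalize namespace prefixes; this one compares literally.
        out.print(escape(new String(ch, start, length)));
    }

    public static void main(String[] args) throws Exception {
        // Usage (illustrative): java NormalizeXml in.xml normalized.xml
        Writer w = new FileWriter(args[1]);
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new File(args[0]), new NormalizeXml(w));
        w.close();
    }
}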

Next, create a list of all the normalized copies, sorted by file size or 
hash, and run Unix cmp or the Windows equivalent on any files with the same 
byte size or hash code.
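
If you'd rather not shell out to cmp, the grouping step might look like 
this in Java; the directory layout and class name are assumed for 
illustration, and SHA-1 is just one digest choice:

import java.io.File;
import java.nio.file.Files;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupByHash {
    public static void main(String[] args) throws Exception {
        // Usage (illustrative): java GroupByHash dir-of-normalized-copies
        Map<String, List<File>> buckets = new HashMap<String, List<File>>();
        for (File f : new File(args[0]).listFiles()) {
            byte[] digest = MessageDigest.getInstance("SHA-1")
                    .digest(Files.readAllBytes(f.toPath()));
            StringBuilder key = new StringBuilder();
            for (byte b : digest) key.append(String.format("%02x", b));
            List<File> group = buckets.get(key.toString());
            if (group == null) {
                group = new ArrayList<File>();
                buckets.put(key.toString(), group);
            }
            group.add(f);
        }
        // Files sharing a digest are duplicate candidates; a final
        // byte-for-byte comparison (cmp or equivalent) confirms them.
        for (List<File> group : buckets.values())
            if (group.size() > 1)
                System.out.println("possible duplicates: " + group);
    }
}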

If this isn't a one-off, it would not be too hard to write your own hash 
generator based on SAX events, again applying local business rules as 
appropriate.
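
Such a generator could feed each SAX event straight into a message digest, 
skipping the serialization pass entirely.  A sketch; the way each event is 
rendered into the digest here is one arbitrary canonical form, not a 
standard:

import java.io.File;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class SaxHasher extends DefaultHandler {
    private final MessageDigest md;

    SaxHasher(MessageDigest md) { this.md = md; }

    private void feed(String s) {
        md.update(s.getBytes(java.nio.charset.StandardCharsets.UTF_8));
    }

    @Override
    public void startElement(String uri, String local, String qName,
                             Attributes atts) {
        feed("<" + qName);
        // Attributes are sorted by name so their order never changes the
        // hash; drop or rewrite entries here to apply business rules.
        Map<String, String> sorted = new TreeMap<String, String>();
        for (int i = 0; i < atts.getLength(); i++)
            sorted.put(atts.getQName(i), atts.getValue(i));
        for (Map.Entry<String, String> a : sorted.entrySet())
            feed(" " + a.getKey() + "=" + a.getValue());
        feed(">");
    }

    @Override
    public void endElement(String uri, String local, String qName) {
        feed("</" + qName + ">");
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        feed(new String(ch, start, length));
    }

    public static void main(String[] args) throws Exception {
        // Usage (illustrative): java SaxHasher some-document.xml
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new File(args[0]), new SaxHasher(md));
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) hex.append(String.format("%02x", b));
        System.out.println(hex);
    }
}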


All the best,


David

  • References:
    • hashing
      • From: Eric Hanson <eric@aquameta.com>
