There are no combining characters below U+0300 (if my memory serves
me well) or in the Han ideographic area. So most Western and Chinese
data can be scanned past very efficiently; none of those characters
requires normalization.
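
In C, a fast path along those lines might look roughly like this (the
exact Han range and the function name are only illustrative, not taken
from any particular parser):

    #include <stdint.h>

    /* Characters below U+0300 and Han ideographs (here taken as
       U+4E00..U+9FFF) never need a combining-character check, so a
       scanner can skip over them without any table lookup. */
    static int can_skip_combining_check(uint32_t cp)
    {
        return cp < 0x0300 || (cp >= 0x4E00 && cp <= 0x9FFF);
    }
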
Apart from this, yes, you need a lookup. The Unicode Consortium used to
recommend a two-level table (e.g. a 256-entry array of 256-bit entries)
to reduce the amount of space needed, since the positive entries are so
sparse.
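
A rough sketch of that two-level layout in C, with illustrative names
and only one page filled in for brevity (a real table would be generated
from the XML 1.0 BaseChar production):

    #include <stdint.h>

    static const uint8_t empty_page[32];        /* shared all-zero bitmap  */
    static const uint8_t *basechar_pages[256];  /* first level: high byte  */
    static uint8_t page_00[32];                 /* second level for U+00xx */

    static void init_tables(void)
    {
        unsigned i, cp;
        for (i = 0; i < 256; i++)
            basechar_pages[i] = empty_page;     /* empty pages are shared  */
        basechar_pages[0x00] = page_00;
        /* Populate one range (ASCII letters) as an example. */
        for (cp = 'A'; cp <= 'Z'; cp++) page_00[cp >> 3] |= 1u << (cp & 7);
        for (cp = 'a'; cp <= 'z'; cp++) page_00[cp >> 3] |= 1u << (cp & 7);
    }

    static int is_basechar(uint16_t cp)
    {
        const uint8_t *page = basechar_pages[cp >> 8];      /* high byte */
        return (page[(cp & 0xFF) >> 3] >> (cp & 0x07)) & 1; /* low byte  */
    }
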
Cheers
Rick Jelliffe
----- Original Message -----
From: "Joshua b. Jore/IT/Imation" <jjore@imation.com>
To: <xml-dev@lists.xml.org>
Sent: Saturday, May 03, 2003 6:50 AM
Subject: [xml-dev] Unified rule for detecting (most) BaseChar characters
> Is there some sort of rule for handling BaseChar character detection? The
> W3C recommendation specifies a dictionary of characters to match, but it's
> large-ish, and if there is some sort of 80/20 rule that handles most of the
> cases I'd like to know about it. So... is there one, or do BaseChar
> detectors always have to use a dictionary?
>
> Joshua b. Jore
> Domino Developer
> Imation Corporation