Re: ASN.1 and XML
- From: Joel Rees <rees@server.mediafusion.co.jp>
- To: Rick Jelliffe <ricko@allette.com.au>
- Date: Tue, 29 May 2001 23:04:38 +0900
Hi Rick,
Rick Jelliffe wrote:
> In some countries, the technological elite uses English and is happy with
> the status quo. In other countries, even the technological elite is not
> comfortable in English.
>
> I would hope that Japan and other ISO members would start to adopt
> a policy of diverting all International Standards to Technical
> Report status if they are gratuitously tied to ASCII rather than to
> Unicode. It is shameful.
I have extremely mixed feelings on this subject.
As a programmer here in Japan, I understand firsthand how hard it is
to get engineers who think they are comfortable programming in
"English" to see the advantage of having meaningful identifiers in
their source. XML is helping, but a lot of our XML documents still
specify Shift_JIS as the encoding. They want a good reason to do the
conversion, and Unicode tends to be seen as just another, slightly
better, more complete JIS. If the Japanese are going to have to
convert, I think they instinctively want to go a step further. I do.
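(Just to show how mechanical the markup side of the conversion is,
here is a minimal Python sketch; the document and element names are
made up. The only markup-level change is the encoding declaration
itself:)

    # A made-up Shift_JIS XML document, as bytes on the wire.
    sjis_doc = ('<?xml version="1.0" encoding="Shift_JIS"?>\n'
                '<見積もり>精密部品</見積もり>\n').encode("shift_jis")

    # Decode, swap the declaration, and re-encode as UTF-8.
    text = sjis_doc.decode("shift_jis")
    utf8_doc = text.replace('encoding="Shift_JIS"',
                            'encoding="UTF-8"').encode("utf-8")
    print(utf8_doc.decode("utf-8"))

(Run under Python 3; the "shift_jis" codec is in the standard
library.)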
JIS was a hack. (And a very good hack!) Encoding by whole Kanji is
roughly the equivalent of encoding English by root word rather than
by letter. Think about what it means to specify in advance which root
words are allowed: no more "warez", etc. ;-) If your name happens to
have a non-standard spelling, you have to use the standard spelling
on all computerized documents. Unicode does not change this,
although, with this year's additions, it brings
<estimate>99+%</estimate> of the known unusual writings into the
standard. (They may have essentially all the unusual writings that
are _officially_ known.)
(And, so that I am not misunderstood: they have done a lot of good
work on Unicode, and I appreciate that very much.)
The point I am trying to push is that users of ideographic characters want
to be able to create characters on the fly, and to be able to transmit such
characters reliably. (By reliably, I mostly mean that the intended character
should display legibly on the recipient's machine; I am not talking about
calligraphy.) We now have processor speeds, memory, and displays sufficient
to allow encoding ideographs by radical component.
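(As a half-step in this direction, Unicode already defines
Ideographic Description Characters, U+2FF0 through U+2FFB, which
spell out an ideograph by its components, although they describe a
character rather than compose a new one. A minimal Python sketch,
using the made-in-Japan character 働 as the example:)

    # U+2FF0 is the "left to right" description character; the
    # sequence below describes 働 (U+50CD, "to work") as the
    # person radical 亻 placed beside 動 ("move").
    ids = "\u2FF0\u4EBB\u52D5"   # ⿰亻動

    for ch in ids:
        print(f"U+{ord(ch):04X} {ch}")

    # A recipient with no glyph for the composed character can
    # still display the description sequence legibly.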
I suspect that there is also a need to analyze text based on
radicals, but that can be done (and will ultimately have to be done)
in part with external tables mapping ideographs to and from lists of
their components.
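(A hypothetical fragment of such a table, again in Python; the
decompositions below are just the obvious visual ones, not taken
from any registry:)

    # Hypothetical component table; entries are illustrative only.
    COMPONENTS = {
        "働": ["亻", "動"],        # person + move: "to work"
        "峠": ["山", "上", "下"],  # mountain + up + down: "pass"
        "鰯": ["魚", "弱"],        # fish + weak: "sardine"
    }

    # Invert the table so text can also be searched by component.
    BY_COMPONENT = {}
    for kanji, parts in COMPONENTS.items():
        for part in parts:
            BY_COMPONENT.setdefault(part, []).append(kanji)

    print(BY_COMPONENT["魚"])   # prints ['鰯']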
(Unicode may have most of the pieces necessary to do all of this. I
am trying to break out some time to play with real code, but there's
too much about creating and registering glyphs that I don't know
yet.)
Anyway, it is not good for national standards to remain tied
exclusively to the old encodings, but I personally see no end to the
need to support multiple encodings.
Joel Rees
============================XML as Best Solution===
Joel Rees リース ジョエル
Media Fusion Co.,Ltd. 株式会社メディアフュージョン
Amagasaki TEL 81-6-6415-2560 FAX 81-6-6415-2556
Tokyo TEL 81-3-3516-2566 FAX 81-3-3516-2567
http://www.mediafusion.co.jp
---------------------------------------------------
Programmer
===================================================