Robin Berjon wrote:
> On the other side we have a group of people that appear to think that so
> long as you have an abstraction cleanly defined, you'll get
> interoperability no matter how many concrete syntaxes you may have to
> deal with.
> Since I've been an XML-head for quite a long time, I have little trouble
> seeing the value in the first side.
Do you think it was a mistake for the W3C not to give XML a single
concrete syntax, then? Is there pressure on the W3C to reduce backwards
compatibility in XML by disallowing alternative character encodings,
in order to turn it into such a single encoding?
I suspect you may face some resistance from the XML standards community
if you try to make it so. Considering that the whitespace-character
changes in the Blueberry effort were all about making life easier for
the EBCDIC folks, there seems to be a current consensus that supporting
non-US-ASCII-compatible environments IS a priority.
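To make the point concrete, here's a small sketch (my own illustration, not from the spec text) of what "multiple concrete syntaxes, one infoset" means in XML today: the same document serialized in UTF-8 and UTF-16 is byte-for-byte different, yet any conforming parser recovers the same tree.

```python
import xml.etree.ElementTree as ET

doc = '<?xml version="1.0" encoding="{}"?><greeting>hello</greeting>'

# Same logical document, two different concrete byte streams:
utf8_bytes = doc.format("UTF-8").encode("utf-8")
utf16_bytes = doc.format("UTF-16").encode("utf-16")  # BOM + UTF-16-LE

# The byte streams differ...
assert utf8_bytes != utf16_bytes

# ...but a conforming parser yields the same content from both.
assert ET.fromstring(utf8_bytes).text == "hello"
assert ET.fromstring(utf16_bytes).text == "hello"
```

That is, the "alternative encodings" question above is already a live instance of the abstraction-vs-syntax trade-off being debated.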
> I'm interested in "experience feedback" from the second group, of which
> you are. Surely, there must have been some concerns about having so many
> ERs, about the overhead of negotiation, about cases in which it couldn't
> happen, about cases in which it failed, etc, no? If you were given the
> power to go back in time and be Supreme God of All ASN.1, how many ERs
> would you need, which would they be, and why?
An unbounded set, really...
Here's the situation.
1) The most general ASN.1 application was the OSI protocol stack, right?
In that stack, everything supported at least BER, and there was
negotiation to see whether both sides preferred something else. So
everything was interoperable (thanks to BER being universal), and more
advanced encodings could be used where needed.
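The negotiation pattern described above can be sketched in a few lines (a hypothetical illustration, not actual OSI presentation-layer code): each side lists the encodings it supports in preference order, and because every implementation carries BER, the intersection is never empty.

```python
def negotiate(ours, theirs):
    """Pick the first of our preferred encodings that the peer also
    supports; BER is the guaranteed common fallback, since every
    implementation in the stack carries it."""
    for enc in ours:
        if enc in theirs:
            return enc
    return "BER"

# One side prefers PER, the other doesn't have it, but both have DER:
assert negotiate(["PER", "DER", "BER"], ["DER", "BER"]) == "DER"
# No shared preference beyond the universal baseline:
assert negotiate(["PER", "BER"], ["XER", "BER"]) == "BER"
```

The design point is the fallback: interoperability is never at risk, and the fancier encodings are pure optimization.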
2) In more specific situations, the choice of encoding is simply
mandated by the standard (e.g., LDAP uses BER, period).
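For readers who haven't met BER: it's a simple tag-length-value format. A minimal sketch of encoding an ASN.1 INTEGER (tag 0x02, short-form length only; my own toy code, not a real library):

```python
def ber_encode_integer(n):
    """BER-encode an INTEGER: tag 0x02, short-form length, then the
    value as big-endian two's complement in the fewest bytes."""
    body = n.to_bytes((n.bit_length() + 8) // 8 or 1, "big", signed=True)
    return bytes([0x02, len(body)]) + body

assert ber_encode_integer(5) == b"\x02\x01\x05"
assert ber_encode_integer(300) == b"\x02\x02\x01\x2c"
assert ber_encode_integer(-1) == b"\x02\x01\xff"
```

When a standard like LDAP mandates exactly this wire form, there is nothing to negotiate and nothing to get wrong at the syntax level.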
>> So people should just drop the idea of introducing a binary alternative?
> Of course not, especially as people won't stop doing so with wishful
> thinking. But we're treading on brittle ground. The question at this
> point in time is not so much whether there are technical solutions to
> binarisation problems since those can be found twelve a penny out there
> (granted, with varying quality). It's about how it fits into a much
> larger system, at what cost, with what trade-offs.
I think that it's clear that people will *always* want alternative
syntaxes for things, for specialist purposes. They demanded it with
ASN.1, and they're demanding it with XML, too. Also note the variety of
character sets supported by MIME text formats, even the different
encodings of Unicode.
Therefore, it would seem best to embrace that by building in support for
interoperability between syntaxes at a low level (pluggable
encoding/decoding modules) rather than at a high level (conversion
tools the user has to invoke!)...
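What "pluggable encoding/decoding modules at a low level" might look like, as a rough sketch (hypothetical registry; I'm using JSON and pickle as stand-ins for two concrete syntaxes of the same abstract value):

```python
import json
import pickle

# Registry of concrete syntaxes: name -> (encode, decode).
codecs = {}

def register(name, encode, decode):
    codecs[name] = (encode, decode)

def transcode(data, source, target):
    """Decode from one concrete syntax and re-encode in another, going
    through the shared abstract value; no user-visible conversion tool."""
    _, decode = codecs[source]
    encode, _ = codecs[target]
    return encode(decode(data))

register("json", lambda v: json.dumps(v).encode(), json.loads)
register("pickle", pickle.dumps, pickle.loads)

# Round-trip through a second syntax and back, losslessly:
binary = transcode(b'{"a": 1}', "json", "pickle")
assert transcode(binary, "pickle", "json") == b'{"a": 1}'
```

The application above the registry never sees which bytes-on-the-wire syntax was used, which is exactly the low-level interoperability argued for here.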