Short form: some applications need a binary. They
may not need XML and may not benefit adequately from
a binary form of true XML, but since they chose XML
and they need a binary, they may not get what they
need from XML specifications and will pursue independent
development anyway. Control should only be applied
where control is useful, and that is my input to the
W3C binary interoperability group: don't develop what
won't be specifically useful if it won't be generally used.
Long form follows:
From: Michael Rys [mailto:email@example.com]
>If you are concerned about interoperability in loosely-coupled systems,
>then more than one interop format is bad. I define the scenario as: I
>publish information, that is being used by applications that I have no
>knowledge and control over and applications operate on data from sources
>that they do not know and have little control over (except for coaxing
>them to provide some data) without having to perform lots of apriori,
My problem with that definition is that it ignores certain facts. A
format handler needs to be predictable; otherwise, how would I know
which one to buy? An HTML handler renders HTML and can do it
loosely, but the market doesn't like that: it spent a lot of resources
trying to lock down presentation through extended means such as
CSS and, in many cases, simply surrendered to IE. This is not a critique
of IE; it is just an obvious fact. In the case of VRML, the failure to
get rendering and behavioral fidelity (and these are not the same)
caused the same behavior in the market. Eventually Cosmo dominated,
although the market collapsed. With X3D, the designers learned their
lesson and created a standard based on the abstract object model
first, and then and only then the encodings. In effect, for some classes
of application, loose coupling is a myth. What is not known, and
not effectively knowable, is what a given platform supports at a
given location when a user there selects a page to download. XML
didn't solve that problem and can't.
>If you start having more than one format available, then you start to
>have to support more than one format on both the client and the server,
>start to have some negotiation protocol to say, what format you prefer
>etc. If you only have one format, then this becomes much simpler, and in
>my opinion often more efficient.
Simpler, yes, but efficiency is a local matter, controlled in some
cases by local policy. In short, it varies by application. There is
no efficient one-size-fits-all, just a one-size-fits-most, even if not
comfortably, somewhat like the old Russian fashion-show commercial.
It can be more efficient, but the application designer is not relieved
of creating the abstract object model, particularly if the
handler has both dimensions of rendering and behavioral fidelity.
>Also note that at least communication overhead in distributed
>environments can be addressed by lower level compression formats "on the
>wire" such as MNP-5 that are transparent to the transported XML.
Noted. The early VRML debates on binaries some years ago settled
on gzip because it was the sweet spot at that time, or so we thought,
until the customers began to rally for a binary.
>To address your cases:
>1. Real time 3D: Do you really consider this a loosely-coupled scenario?
>It depends. If you can live with network latencies etc, it may well
Network latencies are only a big problem for updates in shared worlds.
Even for monoworlds, the size of the VRML/X3D file is mostly the
textures, and these have to be cached in local libraries (e.g.,
Universal Media) or downloaded. For that reason, X3D and others have a
"start when loaded" feature. XML isn't the problem, as you note, nor is
it much of an advantage. It adds size but, since the infoset abstraction
isn't part of the X3D specification, not much else. Even editor support
is dicey because graphics editors rely on hand-to-eye identification
and recognition. It is a touch-and-feel art similar to layout in a
page renderer, but much more so.
>However, in that case, the XML on the wire is a small
>part of the cost. If you want to repurpose VRML for other uses than real
>time 3D, you should be happy about the use of XML and see it as a
Can and do. Repurposing is dodgy, though, when a format includes
behaviors. Thus the MID. Thus XAML. VRML/X3D mixes behaviors
into the client language, and for real time, that is essential.
I think anyone who starts attempting to make libraries of repurposable
XAML will encounter similar issues.
>Also without knowing what they consider the speed bottleneck
>and the general scenarios beyond VRML, it may be that a binary format
>just doing real time 3D may be better than XML. But I have a feeling
>that the real perf issue is not the parsing of the XML, but the general
It isn't the parsing, typically. The problems of real-time 3D are
synchronization in a multi-player model of operation, and keeping
rendering rates around 15 fps at the low end and at least 30 at
the sweet spot. Consider that real-time 3D simulation has to
preserve the 'reality paradigm' in games, so loading bits into
and out of the scene without breaking the action to resync all
of the clients is a bear. Think of the scenes in The Matrix
where they freeze the action so the machine can catch up to
the human's unpredictable actions. So one gets speed wherever
one can find it, and the parsing is not off-limits but is, as
you say, a tradeoff.
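Those rendering rates imply hard per-frame budgets, which is why every
millisecond spent parsing or resyncing matters. A minimal sketch of the
arithmetic:

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available per frame at a given frame rate."""
    return 1000.0 / fps

# At 15 fps the renderer has ~66.7 ms per frame; at 30 fps, only ~33.3 ms.
# Parsing, scene loading, and client resync all come out of that budget.
print(round(frame_budget_ms(15), 2))  # prints 66.67
print(round(frame_budget_ms(30), 2))  # prints 33.33
```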
>2. No. XAML is using the XML representation to be interoperable and
>usable in a loosely-coupled scenario. The application that compiles XAML
>into a UI is only one application. You could potentially use the XAML
>format for totally different purposes (like transforming it into another
>XML markup that does the same or something different). The compilation
>aspect is not part of the discussion about using "binary XML" vs true
Interesting and noted. That was the MID reasoning as well, but compilation
for performance was required. Interpreting MID, and I assume XAML, is a
non-starter outside the editing suite.
>3. See above. In addition, couplings that are more tight normally tend
>to be based on a controlled environment where specific additional
>protocols exist to exchange information. In those contexts, it may make
>sense to use some scenario specific binary encoding.
See response above. I think you are being disingenuous. One can disregard
these considerations precisely because the controls have already emerged
in the form of standards and specifications a priori. It is the
use or reuse that your description applies to. XML is a hedge against
>For example, a database server that serves XML through a variety of DB
>APIs may decide that it off-loads some of the serialization of the XML
>to the API layer (or even only expose a reader interface). In that case,
>if the client communicates that it understands the binary format
>provided by the server in some way or the server says, if you are API X
>I trust you to understand the binary format, then you use binary format,
>but otherwise I give you the textual XML. However, this is a
>tightly-coupled system since the server and clients know each other, the
>user of the API still sees only the XML and the binary format is
>optimized for this scenario (which may not work in another scenario).
Note that you are dropping down a scale dimension or two for your
example. Client/server != web client/web service. I'm not sure
the same holds above the scale of the local system. In fact,
the more loosely coupled one is, I think, the more one has to be
negotiating. It is just the humans who are negotiating, offline,
and the machine, therefore, can blithely be unaware.
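The tightly-coupled handshake described in the quoted database example can
be sketched as follows: the server hands out its binary serialization only
to clients that declare they understand it, or to APIs it trusts, and falls
back to textual XML otherwise. The format names and the trusted-API list
here are invented for illustration.

```python
# APIs the server trusts to understand its binary form (hypothetical name).
TRUSTED_APIS = {"api-x"}

def choose_format(client_accepts: set, client_api: str) -> str:
    """Pick a serialization for one client, per the scheme quoted above."""
    if "binary-xml" in client_accepts or client_api in TRUSTED_APIS:
        return "binary-xml"   # client is known to cope with the binary form
    return "text-xml"         # otherwise fall back to textual XML

print(choose_format({"binary-xml"}, "other"))  # prints binary-xml
print(choose_format(set(), "api-x"))           # prints binary-xml
print(choose_format(set(), "other"))           # prints text-xml
```

The fallback branch is what keeps the system interoperable: an unknown
client always gets textual XML, which is the point of the quoted scenario.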