From: Robin Berjon [mailto:email@example.com]
Bullard, Claude L (Len) wrote:
>> This is likely the nut of the problem to be resolved. It is possible
>> that binarization gets the best results for each application
>> by becoming non-interoperable across applications.
>Yes, that is definitely something to be given thought to. However the same
>could be said of virtually anything meant to interoperate. A number of features
>could be dropped from XML for instance, but the hard thing there would be to
>find which parts to drop.
True, and a situation that should not be repeated where possible. Yet I don't
think the comparison is valid: XML is a metalanguage, and interoperation there
is more necessary, and therefore less efficient than what can be achieved in
binaries for application languages, but acceptably so. But what it means for
the metalanguage handlers to interoperate is different than for the
application language handlers, and is different for static content than for
real-time content. It will be interesting to see if REST impacts the binary
solutions.
>It is likely that an interoperable bInfoset format would be less efficient than
>a highly specialised one. I'm pretty sure I could cook up something that would
>compress better than most bInfoset formats (like XbMill) and lose speed,
>update, etc. But if I'm only gaining 10% on the interoperable format, it's not
>worth it.
See above. Making XML processors interoperate is probably an order of magnitude
easier than making application language processors interoperate.
>It's all in figuring out the right trade-off, and I'm certain it's very
>well possible to find something generic.
To be determined. Compression is only one requirement, and the easy one if it
were not for the streaming and other requirements.
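To illustrate why compression is "the easy one": a minimal sketch, in Python, of the tension between compression ratio and random access or streaming. The record structure here is entirely invented for illustration; the point is only that a generic compressor applied to a whole document defeats per-item access, while per-item compression pays a per-stream overhead.

```python
# Sketch of the compression-vs-random-access trade-off (illustrative data).
import gzip

# A hypothetical document made of many small repeated elements.
records = [f"<item id='{i}'><name>part-{i}</name></item>" for i in range(1000)]
doc = "<catalog>" + "".join(records) + "</catalog>"

# Whole-document compression: best ratio, but reading item 500 means
# decompressing everything before it -- poor for streaming or random update.
whole = gzip.compress(doc.encode())

# Per-record compression: each item is independently decodable, at the
# cost of per-stream overhead and a worse overall ratio.
chunks = [gzip.compress(r.encode()) for r in records]
chunked = sum(len(c) for c in chunks)

print(len(doc), len(whole), chunked)  # chunked > len(whole)
```

The gap between `whole` and `chunked` is the price a format pays for streaming and update requirements on top of raw compression.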
>Besides, considering losing interoperability may be an option in some cases
>today, but I see it as an increasingly costly decision as "webization" moves
>from outside documents (making things HTTP-accessible) to within them (via
>mixed-namespace documents). As the workings of multiple-namespace documents
>become increasingly well understood, I wouldn't be surprised to see some
>mixes of SVG, XForms, X3D, various voice MLs, some music and sound control,
>XHTML, all of those shooting off or carried inside Web Service messages, with
>lots of RDF sprinkled over. In fact, with the exception of X3D and the music
>stuff, all of that is already visible in the SVG world.
We all understand how to mix document namespaces. The problem is mixed
object models for operating software. At the bottom of all of this is
the problem of mixing DLLs. Microsoft solved that with stylesheets too.
SVG is yet another HTML in that respect (which language is the outer
owner of the initial containers) and I don't think we can legislate a
solution via binaries. I think these are different problems of how
frameworks identify, find, and load handlers. Once again, people
who played with Chrome discovered just how hard this is when
efficiency is accounted for. One might argue that OBJECT tags
are about as far as that sort of thing should go.
>You aren't going to go far in there without an interoperable format :)
Formats don't interoperate. Code interoperates. Failing to make
that distinction will doom the effort from jump.
>> Considering all text nodes to be of the same content and
>> equal in value is false. The binarization approach taken
>> to indexed face set content (see X3D) and <p> may be
>> completely different.
>Very true. I'm tempted to place that as issue number one; the rest (encoding
>the structure) can have multiple solutions, but it ought to be simple enough to
>solve.
Would that it were. "Being an Arab will be thornier than you think" as
the old fellow said in the movie. Do you like context dependent or
context independent parsing? One pass or multiple passes? And so it will go,
but yes, it is content that will make the big difference, not the XML
metatypes (e.g., the infoset).
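The point that content, not the infoset, drives the gains can be sketched concretely. To the infoset, an X3D coordinate list and a `<p>` are both just text nodes; only a type-aware encoder can exploit the difference. The values below are invented illustrative data.

```python
# Hedged sketch: the same "text node" encoded generically vs type-aware.
import struct

# A hypothetical IndexedFaceSet-style coordinate list (1000 floats).
coords = [0.125, 1.375, 2.625, -3.875] * 250

# Generic approach: keep the node as text, as a <p> element would need.
as_text = " ".join(repr(c) for c in coords).encode("utf-8")

# Type-aware approach: pack the same node as IEEE-754 singles. This only
# makes sense because we know the node is a float list, not prose.
as_floats = struct.pack(f"<{len(coords)}f", *coords)

print(len(as_text), len(as_floats))  # binary packing wins for numeric content
```

A binarization that treats all text nodes identically forgoes exactly this kind of win, which is why the face-set and `<p>` cases can end up with completely different encodings.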
>Addressing the encoding of text nodes leads one to think of pluggable
>codecs, and all the associated interoperability issues. Some groups however have
>been working on the issue (I believe notably around X3D) and I'm pretty sure we
>can find something inside (abstract codecs) or outside (content negotiation).
>In fact, if HTTP 1.1 content codings took parameters, we'd have the answer
>already. Damn :)
If they did, they would be function calls and the REST would go away. :-)
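For context on the exchange above: in HTTP/1.1 (RFC 2616), content-coding values are bare tokens such as "gzip" or "deflate", so a coding cannot carry codec parameters. A minimal sketch of the hypothetical, non-standard syntax the thread wishes existed; the coding name and parameter names below are invented.

```python
# Hypothetical sketch: what a parameterized content coding might look like
# if HTTP/1.1 allowed it. Real content-coding values take no parameters.
def parse_parameterized_coding(value: str):
    """Parse an invented 'token;key=val;key=val' content-coding form."""
    token, *raw_params = [p.strip() for p in value.split(";")]
    params = dict(p.split("=", 1) for p in raw_params)
    return token, params

# Invented example: a binary-infoset coding naming its codec per node type.
coding, params = parse_parameterized_coding(
    "x-binfoset; codec=indexed-face-set; floats=ieee754-32"
)
print(coding, params)
```

Which is Len's point: once the coding carries parameters like these, the exchange starts to look like a negotiated function call rather than a uniform representation.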