[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
RE: Images embedded in XML
- From: "Al B. Snell" <firstname.lastname@example.org>
- To: Danny Ayers <email@example.com>
- Date: Sun, 08 Apr 2001 02:54:50 +0100 (BST)
On Fri, 6 Apr 2001, Danny Ayers wrote:
> <- Hmmm, it'd be nice if XML was more compact, faster to parse, and let you
> <- embed other data streams more easily - do we *have* to make it
> <- human-readable UTF-8?
> It wouldn't be extensible or markup - I suppose you could just call it 'L'
It would be extensible, since any XML document could be encoded in it and
vice versa; and it'd be a data representation language as opposed to a
markup language being used for data representation... I'd prefer it to be
simple enough to work with that it's a "format" rather than an entire
"language", so let's call it XDF :-)
> generation and reading another. All are done with binary formats in lots of
> systems, but to be able to use these completely across the board you need to
> go to a pretty low common denominator such as plain text.
This is a bit of a myth... plain text is less of a common denominator
than two's complement or unsigned binary integers, since plain text is
described in *terms* of those, and XML is far from a simple "lowest common
denominator" data format. It would take me a few minutes to write a set of
routines in C to serialise and unserialise data in network byte order, and
perhaps an hour at most to implement this for a self-describing
format. Compare that to how long it takes to implement an XML parser, to
the parser's run-time space/time requirements, and to the size of the XML
documents compared to the "XDF" records...
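To make the comparison concrete, here is a minimal sketch of the kind of
routines described above (the record layout and field names are invented
for illustration): pack a fixed-layout record into network byte order,
then unpack it on any host, endianness notwithstanding.

```c
/* Sketch: serialise/unserialise a two-field record in network byte
 * order.  The struct and its fields are hypothetical examples. */
#include <arpa/inet.h>   /* htonl, ntohl */
#include <stdint.h>
#include <string.h>

struct record {
    uint32_t id;
    uint32_t value;
};

/* Pack into an 8-byte wire buffer, big-endian ("network order"). */
void pack_record(const struct record *r, unsigned char buf[8])
{
    uint32_t id = htonl(r->id), value = htonl(r->value);
    memcpy(buf,     &id,    4);
    memcpy(buf + 4, &value, 4);
}

/* Unpack back into host byte order, whatever the host's endianness. */
void unpack_record(const unsigned char buf[8], struct record *r)
{
    uint32_t id, value;
    memcpy(&id,    buf,     4);
    memcpy(&value, buf + 4, 4);
    r->id    = ntohl(id);
    r->value = ntohl(value);
}
```

That really is the whole of it for a fixed record; a self-describing
variant only needs a type tag and a length prefix per field.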
Since parsing XML is complex, XML's adoption will be limited by the
development rate of XML parsers... I have heard a few people say "Ah, I
can write an XML parser in 10 lines of Perl", but those parsers don't
process entity references or handle namespaces :-)
> Either you do
> without compatibility between systems or you sacrifice a bit of speed.
Only a tiny smidgen... many CPUs can do network/host byte order
translation in a single instruction; compare that to the time an XML
parser takes to locate a text node by stepping through the document for
delimiters, then stripping the whitespace and converting the UTF-8
decimal text to an integer in host byte order...
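The contrast above can be sketched in a few lines of C. The binary case
is one library call; the text case (grossly simplified here, with an
invented helper that ignores entities, attributes and encodings) must
scan for delimiters, trim whitespace, and convert decimal text:

```c
/* Sketch: decoding a 32-bit integer from a binary wire format versus
 * from XML-ish text.  decode_text() is a deliberately naive example. */
#include <arpa/inet.h>   /* ntohl */
#include <ctype.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Binary case: one ntohl() call, often a single instruction. */
uint32_t decode_binary(const unsigned char buf[4])
{
    uint32_t n;
    memcpy(&n, buf, 4);
    return ntohl(n);
}

/* Text case: find the element content, skip whitespace, parse decimal.
 * A real XML parser also has to handle entity references etc. */
uint32_t decode_text(const char *xml)
{
    const char *p = strchr(xml, '>');        /* end of the start tag */
    if (!p)
        return 0;
    p++;
    while (isspace((unsigned char)*p))       /* trim leading whitespace */
        p++;
    return (uint32_t)strtoul(p, NULL, 10);   /* decimal text -> integer */
}
```

Even this toy text decoder does strictly more work per integer than the
binary one, and it skips everything that makes real XML parsing hard.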
> are compromises though - you could have a reference in your markup to a
> binary file (e.g. a .jpg) and the processor could receive this separately
> from the markup, as in HTML browsers.
Yes, but this is a kludge; XML isn't good enough to realistically embed
binary objects inside it, so they have to go over a separate connection
with some complex referencing mechanism.
XML is posing as something suitable for forming the core of many systems
of communicating software modules, yet it is incredibly unwieldy compared
to much simpler to use and implement techniques of precisely the same
expressive power. I don't want to start a flamewar, but is this *really* a
wise application of XML? Shouldn't it stick with replacing SGML/DSSSL/HTML
with XML/XSLT/XHTML and remain in the realm of documentation systems,
to which it is much more applicable, rather than all this XML-RPC and SOAP
business?
Alaric B. Snell
http://www.alaric-snell.com/ http://RFC.net/ http://www.warhead.org.uk/
Any sufficiently advanced technology can be emulated in software