Fat clients/thin data (Re: [xml-dev] RE: Why can't we all work together?)
Bob Wyman wrote:
Claude L Bullard wrote:
Why would one want to use a fat client on the web?
Are these really fat clients?
Virtually *everyone* uses a "fat client" every time they access
the Web. What the heck do you think Internet Explorer is? Are you
suggesting that it is "thin?" (No, it's one of the "fattest" clients
you can find...)
For the Web, I don't know if the fatness of the client is more
important than the thinness of
the data.
XML-based languages have three architectural problems which inhibit
their chances of successfully replacing HTML for general web use:
* Draconian error-handling (DEH) implies that applications won't
progressively render the XML to the screen until it has fully arrived,
and, if applications do render progressively, DEH implies that a page
rendered from an erroneous XML document should probably be withdrawn
(there is a small sketch of this after the list). It is difficult to see
how this can be made pleasant for large documents.
* Even if a streaming implementation is made, the latency before
starting to render can blow out whenever there are subordinate
resources needed to interpret or complete the current document
(DTDs, especially entity declarations, schemas, XSLT).
* And when these subordinate resources are expressed in uncompressed
XML, the time blows out anyway. Client-side XSLT, client-side schemas
and client-side entity declarations all fail for the same reason: the
size of the subordinate resource dwarfs the size of the data.
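
As a minimal sketch of the first point (my own example, using only the
Python standard library, not anything from the thread): a conforming XML
parser must treat a well-formedness error as fatal, so a renderer that
had started painting a partial document would have to throw that work
away, whereas an HTML parser keeps whatever it can recover.

import xml.sax
from html.parser import HTMLParser

# A document cut off mid-stream, as a slow or lossy connection might deliver it.
broken = "<page><h1>Breaking news</h1><p>First paragraph</p><p>Oops"

# XML: the well-formedness failure is fatal, so anything already painted
# from the partial stream would, strictly, have to be withdrawn.
try:
    xml.sax.parseString(broken.encode("utf-8"), xml.sax.ContentHandler())
except xml.sax.SAXParseException as e:
    print("XML parse aborted:", e.getMessage())

# HTML: the parser carries on and hands back whatever text it has seen,
# which is what makes progressive rendering safe.
class TextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

collector = TextCollector()
collector.feed(broken)
print("HTML parser recovered:", collector.chunks)
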
Now, of course, using a simple CSS stylesheet can result in a big
saving compared to an HTML file. But one nice feature of JavaScript,
say, is that the page renders happily while the behaviour is being
downloaded. Less fragile.
Lots of the world is not (North Americans may be astounded by the
quaintness!) connected by fast lines. Indeed, the number of routers
that data has to get through to reach other continents means there is
a high probability that packets will be lost for WWW traffic between
distant points. (Looking at router statistics for various countries,
it is not uncommon to see 20% packet losses from time to time at major
internet routers.)
The smaller the data size, the more chance that any particular page
will get through without
requiring retransmits (or lost ACK timeouts).
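
A back-of-the-envelope illustration of that point (my numbers, purely
hypothetical): if each packet is lost independently with probability p,
a page that fits in n packets arrives with no retransmission with
probability (1 - p)^n, which drops off quickly as pages get larger.

# Rough illustration only: independent per-packet loss, ~1460-byte payloads.
def p_clean_delivery(page_bytes, loss_rate, payload=1460):
    packets = -(-page_bytes // payload)   # ceiling division
    return (1 - loss_rate) ** packets

# e.g. a terse HTML page vs. an XML page that also drags in a schema and XSLT
for size in (10_000, 50_000, 250_000):
    print(size, "bytes:", round(p_clean_delivery(size, 0.02) * 100, 1),
          "% chance of arriving with no retransmit at 2% loss")

At a 2% loss rate that works out to roughly 87% for a 10 KB page but
only about 3% for a 250 KB page, which is the shape of the argument
above.
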
We know that people are impatient if a casual page (or a page with low
anticipated value)
takes a long time to load. But people are willing to wait a certain
amount of time for
higher-value pages to load (Flash, incremental PDF, etc.). So
realistically it seems that
anyone who is worried about whether XAML is a near-term threat to HTML
should
look at its relative error-handling, latencies and size performance
characteristics.
To put this another way, think of progressive rendering of graphics
files: I think most
people who are not on fast lines find progressive rendering useful. It
is a good
feature of HTML, because it scales well: busy servers can flush out as
much as they
can and the punter gets as much as possible on their screen as soon as
possible.
I suspect any successful XML-based attempt to replace HTML needs to be
architected along lines that allow progressive rendering and low
latency: some instant visual framework should come first, then styles,
then embedded static media that are visible, then metadata and
behaviours, then streaming media, then other static
media/behaviours/styles for non-visible parts of the same document,
then preloaded resources for caching purposes.
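
Purely as a hypothetical sketch of that ordering (the tier names and
file names are mine, not from any spec), the scheduling discipline
amounts to a small priority queue:

import heapq

# Hypothetical tiers, ordered by how soon each is needed for the first screen.
TIERS = {
    "visual-framework": 0,
    "styles": 1,
    "visible-static-media": 2,
    "metadata-and-behaviours": 3,
    "streaming-media": 4,
    "non-visible-resources": 5,
    "preload-for-cache": 6,
}

def schedule(resources):
    """Yield resource URLs in the order they should be fetched."""
    queue = [(TIERS[tier], url) for url, tier in resources]
    heapq.heapify(queue)
    while queue:
        yield heapq.heappop(queue)[1]

page = [
    ("next-page.xml", "preload-for-cache"),
    ("behave.js", "metadata-and-behaviours"),
    ("page.xml", "visual-framework"),
    ("promo.mpg", "streaming-media"),
    ("site.css", "styles"),
    ("hero.png", "visible-static-media"),
    ("footer-widget.js", "non-visible-resources"),
]
print(list(schedule(page)))
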
To an extent, the idea that compression or binary transmissions will
fix up XML for this kind of use is wrong. The chunks of information
that get downloaded need to be queued by how much they are needed to
render the first screen as soon as possible. HTML gets this right
(terse markup, non-Draconian error-handling, a terser stylesheet
language that works in a streaming context, parallel downloads of
JavaScript or behaviours, multiple parallel downloads of visual
decorations, fast display of primary text such as the first heading).
Cheers
Rick Jelliffe
P.S. Len, this is a definite issue for an XISMID design: the IETM needs
to
be physically available as one page, one for each URL. (And any
content-dependent information models used must have the basic
streaming discipline of never requiring forward references to perform
rendering.)