RE: [xml-dev] Where is XML going
- From: "David Lee" <email@example.com>
- To: "'Ben Trafford'" <firstname.lastname@example.org>
- Date: Sun, 5 Dec 2010 08:59:01 -0500
I must be working with different 'kinds' of documents than you (Ben).
The space I work in primarily is the Clinical Information area. These systems have a wide variety of XML data (often from non-XML sources)
which varies dramatically in its 'human readability': some of it completely obtuse, some that would make reasonable sense if you stripped out the tags,
and everything in between.
But *NONE* of them could be reasonably presented by applying CSS-type technology as-is, at least to the specifications of our product designers.
There are a lot of reasons why, and it's not just the complex ones. The *simple* ones have problems.
A trivial example is re-ordering: text needs to be rendered in a different order than document order (e.g. moved around).
Text injection is another. Often there are references by ID values to 'outside data' (could be other XML data, a database, or calculated values).
This data needs to be extracted and inserted.
A simple (made up but representative) example:
  Input:  Take <dose amount="10" unit="mg" freq="daily"/> of <med id="12345"/>
  Output: Take 10mg of <a href="/drugs/12345">aspirin</a> daily.
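Both issues (re-ordering and injection) show up even in this tiny example. A minimal sketch of the transformation in Python, where the `DRUG_DB` table is a made-up stand-in for the out-of-band data source and the element and attribute names are taken from the example above:

```python
import xml.etree.ElementTree as ET

# Hypothetical out-of-band data; in practice a database or service call.
DRUG_DB = {"12345": "aspirin"}

def render_dose(xml_text):
    """Transform the clinical markup into presentable HTML.

    Note the re-ordering: freq ('daily') moves from the middle of the
    markup to the end of the sentence, and the drug name is injected
    from data that is not in the document at all.
    """
    root = ET.fromstring(xml_text)
    dose = root.find("dose")
    med = root.find("med")
    med_id = med.get("id")
    name = DRUG_DB[med_id]  # text injection from outside data
    return ('Take %s%s of <a href="/drugs/%s">%s</a> %s.'
            % (dose.get("amount"), dose.get("unit"),
               med_id, name, dose.get("freq")))

print(render_dose('<rx>Take <dose amount="10" unit="mg" freq="daily"/> '
                  'of <med id="12345"/></rx>'))
# → Take 10mg of <a href="/drugs/12345">aspirin</a> daily.
```

Even this trivial case needs domain knowledge (what a <med> is) plus a data lookup that CSS-style styling simply cannot express.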
The important note here is that while the documents are *human readable* (ignoring the ones that are not),
they are *not presentable* without domain knowledge, and possibly not without access to 'out of band' data.
This (and similar issues) has led me to the conclusion that simply extending the presentation paradigm will *not* solve the problem of 'XML on the Web'. The existing toolsets are NOT capable of making even simple transformations of XML to presentation, even with some enhancements, and even given 'human readable' XML documents.
Then there are the 'not so human readable' documents that are still "Documents", say the XML form of a Word document.
So I have concluded that *somewhere* some heavy lifting needs to be done to bring this dark web into presentation.
And it can't be done with existing CSS-type technology.
And it's difficult to do *generically*: each document type needs different rules to transform it.
And that engine and those rules have to live somewhere.
Now on to phase 2.
It has been suggested that the Client is the place to do the heavy lifting.
But I'm concerned it's not the solve-all solution.
Why? Here's where I suspect we definitely come from different spaces: I work primarily in the mobile space,
which tends to make me focus a LOT on things like download speeds, latency, and client processing power. But today's mobile is tomorrow's desktop.
And the networking issues are similar even today.
Now I know that mobile devices and cell networking are improving at a remarkable pace, but it's also true that desktops are diminishing and turning into mobile devices. Did you know there are actually more mobile browsers in the wild than desktop browsers today? This trend will continue.
I've watched for decades as the pendulum swings back and forth between client heavy vs. server heavy architectures.
So given all that, my conclusions are:
1) The client is *not* an infinite untapped resource of CPU power.
Rendering HTML strains most clients (desktop & mobile) to the extreme already.
2) The complexity of the transformation is beyond current built-in client/browser stack capability and may require external data.
3) Transformations need to change dramatically based on document and document type, and the rules for those need to change.
This means that the *code* to process documents into presentation may well exceed the document size.
And the ancillary data the code needs to do the transformation may not currently reside on the client (so it has to be fetched).
And *that* data may be huge, so it needs some efficient service to query, and if there are lots of those requests,
then *latency* is a huge problem.
Some of this can be solved by special purpose apps downloaded ahead of time (what most Android and iPhone apps really are under the hood).
But to solve it 'in the wild' you can't be asking people to download a reader app for every web site & document type (or can you?).
These are a lot of issues, and I agree they don't necessarily build on each other logically to form an absolute proof,
but in my mind they *weigh* on each other and *add up* to a reasonable conclusion:
That it is difficult, perhaps untenable, to expect simple enhancements to the client stack to magically make it capable of rendering the dark web of XML documents all on the client in a presentable and efficient way. Thus I still hold that for now (5 years? past that my imagination is feeble) the server is the appropriate place to do the transformations, at least to a form that is *closer* to presentation structure.
Maybe not spit out the full HTML, but at least spit out something that is *easily and efficiently* translated to HTML on the client.
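As a hedged sketch of what that split might look like (the function names and the run vocabulary here are invented for illustration, not an existing format): the server, which has the domain knowledge and the out-of-band data, flattens the document into generic typed runs, and the client maps those to HTML with a trivial loop.

```python
import json

# Server side (has the domain knowledge): emit generic typed runs.
def server_transform(dose, med_id, drug_db):
    runs = [
        {"t": "text", "v": "Take %(amount)s%(unit)s of " % dose},
        {"t": "link", "href": "/drugs/" + med_id, "v": drug_db[med_id]},
        {"t": "text", "v": " %s." % dose["freq"]},
    ]
    return json.dumps(runs)

# Client side (no domain knowledge): a cheap, generic mapping to HTML.
def client_render(payload):
    html = []
    for run in json.loads(payload):
        if run["t"] == "link":
            html.append('<a href="%s">%s</a>' % (run["href"], run["v"]))
        else:
            html.append(run["v"])
    return "".join(html)

payload = server_transform({"amount": "10", "unit": "mg", "freq": "daily"},
                           "12345", {"12345": "aspirin"})
print(client_render(payload))
# → Take 10mg of <a href="/drugs/12345">aspirin</a> daily.
```

The point of the split: all the per-document-type rules and data lookups stay server-side, while the client loop never changes, no matter the document type.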
David A. Lee
From: Ben Trafford [mailto:email@example.com]
Sent: Saturday, December 04, 2010 10:33 PM
To: David Lee
Cc: Peter Hunsberger; Michael Kay; firstname.lastname@example.org
Subject: Re: [xml-dev] Where is XML going
I think we're coming from alternate points of view. You seem to be
approaching XML as human-incomprehensible data (the app developer
viewpoint) -- I'm approaching it as a human-comprehensible document.
There are numerous examples of both, but it's become increasingly common
for people to ignore the vast, unimaginable quantity of XML documents
that exist as human-comprehensible data. An example would be the
plethora of data that exists in aviation repair manuals -- literally,
hundreds of millions of pages worth of pure document.
There will always be room for server-based transformations et al., but
that space is very well addressed by existing technologies. What is
extremely poorly addressed is the document-to-end-user space, and
-that's- what needs to be fixed, in my opinion.
On Sat, 2010-12-04 at 22:15 -0500, David Lee wrote:
> Good argument
> But how does the browser know what the data means well enough to present it?
> I feel there is a difference of opinion regarding separation of concerns that is a fundamental rift in the community.
> Sent from my iPad (excuse the terseness)
> David A Lee
> On Dec 4, 2010, at 10:03 PM, Peter Hunsberger <email@example.com> wrote:
> > On Sat, Dec 4, 2010 at 7:03 PM, David Lee <firstname.lastname@example.org> wrote:
> >> In my opinion the server is 'closer to the data' than the browser. It has more chance of knowing about the meaning of the data than the browser does.
> > So? The browser is closer to the user, it has more chance of knowing
> > about the presentation requirements than the server.
> > --
> > Peter Hunsberger
> XML-DEV is a publicly archived, unmoderated list hosted by OASIS
> to support XML implementation and development. To minimize
> spam in the archives, you must subscribe before posting.