- To: <xml-dev@lists.xml.org>
- Subject: Another way to optimize XML
- From: "Stephen E. Beller" <sbeller@nhds.com>
- Date: Wed, 14 Apr 2004 17:26:11 -0400
- Importance: Normal
- Organization: NHDS, Inc.
Based on what Stephen said (below), I offer a model I've invented for
optimizing XML to render large data sets as charts, tables, etc.; it:
1. Converts XML documents to a delimited text file (e.g., CSV) prior to
transmission to clients (see the sketch after this list)
2. Combines the XML data elements with relational database dumps/queries,
legacy system flat file reports, and even OLAP data [Optional]
3. Manipulates/processes the data (data mining, statistical analysis)
[Optional]
4. Organizes the CSV's contents into logically/semantically configured
arrays optimized for rapid rendering
5. Transmits the CSV to clients for local storage, parsing and rendering.
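To make step 1 concrete, here is a rough Python sketch (the file and
element names are invented for illustration; it assumes a flat,
record-oriented XML document in which each repeating element's children
map to columns):

    import csv
    import xml.etree.ElementTree as ET

    def xml_to_csv(xml_path, csv_path):
        # Assumes <records><record><field>...</field>...</record>...</records>
        records = list(ET.parse(xml_path).getroot())
        if not records:
            return
        # Column order (and the semantics) come from the first record's tags.
        columns = [child.tag for child in records[0]]
        with open(csv_path, "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(columns)  # one header row replaces per-element tags
            for rec in records:
                writer.writerow([rec.findtext(col, default="") for col in columns])

    xml_to_csv("patients.xml", "patients.csv")

Most of the size win in (a) below comes from writing each tag name once,
in the header row, instead of twice per field per record.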
This process:
(a) Transforms XML documents into the smallest possible file (as much as 25
times smaller) for transmission and client-side storage
(b) Enables integration with other data sources
(c) Simplifies and speeds the parsing process
(d) Requires minimal processing and memory overhead
(e) Uses semantics/pointers based on logical data structures (column/row
locations)
(f) Reduces malware exposure, since a delimited ASCII text file carries no
executable code (no format is strictly "virus-proof", but plain text comes
close)
(g) Offers an additional level of obfuscation: the data can be "scrambled"
prior to transmission, then unscrambled client-side (a toy illustration
follows this list).
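On point (g), one reversible "scramble" might be a keyed column
permutation; the Python sketch below is a toy, the shared key handling is
left out entirely, and this is obfuscation only, not a substitute for real
encryption such as TLS:

    import random

    def scramble(rows, key):
        # Deterministically permute columns using a shared numeric key.
        order = list(range(len(rows[0])))
        random.Random(key).shuffle(order)
        return [[row[i] for i in order] for row in rows]

    def unscramble(rows, key):
        # Rebuild the same permutation, then invert it.
        order = list(range(len(rows[0])))
        random.Random(key).shuffle(order)
        inverse = [0] * len(order)
        for new_pos, old_pos in enumerate(order):
            inverse[old_pos] = new_pos
        return [[row[inverse[i]] for i in range(len(order))] for row in rows]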
If presentation is done via COM, a macro-driven spreadsheet quickly and
easily renders the CSV contents client-side to provide offline, interactive
charts, etc. Users could, for example, slice, dice, and drill down into the
data instantaneously using pivot tables. They could also generate hundreds
or thousands of completely different charts in seconds, and jump from
viewing one to another instantaneously. Furthermore, since all the data are
stored locally in the CSV, and since data updates are extremely rapid due to
the very small size of the CSV, the information is portable and requires
minimal online time, which is ideal for mobile users.
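The proposal above is spreadsheet/COM-based, but the same slice-and-dice
idea can be sketched in other environments too; for instance, in Python
with pandas (the column names here are invented):

    import pandas as pd

    # The CSV is already stored locally, so every pivot below runs offline.
    df = pd.read_csv("patients.csv")

    # One of many possible views: mean score by region, broken out by year.
    print(df.pivot_table(values="score", index="region",
                         columns="year", aggfunc="mean"))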
For browser presentation, the CSV's contents are used client-side to
generate graphic images of charts (e.g., GIFs) and to create an XML
document, which is then transformed via XSLT into XHTML, etc. We're now
looking for a way for a browser to render the CSV client-side without
having to transform it back into XML.
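A sketch of that CSV-to-XML round trip using only the Python standard
library (element names are illustrative, and the CSV header values must be
legal XML tag names):

    import csv
    import xml.etree.ElementTree as ET

    def csv_to_xml(csv_path, xml_path):
        root = ET.Element("records")
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):  # header row supplies the tag names
                rec = ET.SubElement(root, "record")
                for col, value in row.items():
                    ET.SubElement(rec, col).text = value
        ET.ElementTree(root).write(xml_path, encoding="utf-8")

    csv_to_xml("patients.csv", "patients.xml")  # then apply XSLT to the result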
Does this sound like a valuable XML optimization model for large data
sets?
Steve Beller
-----Original Message-----
From: Stephen D. Williams [mailto:sdw@lig.net]
Sent: Monday, April 12, 2004 3:16 PM
To: bob@wyman.us
Cc: 'Michael Champion'; ''xml-dev' DEV'
Subject: Re: [xml-dev] XML Binary Characterization WG public list available
+1
My design does all of the things you mention, and more.
I disagree, however, about optimizing multiple aspects: you must optimize
along as many axes as are appropriate to you if you want a solution that is
the best combination of tradeoffs. It is more difficult to do this of
course, but anything else doesn't meet the requirements, my requirements
at least.
I do of course have a strong ordering of what is important:
CPU processing overhead, memory processing overhead, new semantics for
libraries (fast pointers to support any logical data structure, deltas),
storage/transmission space efficiency, support for binary payloads. This
has led me to consider solutions that don't seem to have been tried
seriously before such as avoiding parsing and serialization altogether
for my main mode of usage.
sdw