OASIS Mailing List Archives
2007 Prediction: A Disruptive Technology

Len Bullard writes:

"...if the performance sucks, the fun goes out of it and then it becomes a
boring, repetitive and largely uncreative means of embedding moving gimlets
inside infinite texts. The globally shared information space does not exist
because of the browser. It exists because of the largely shared and limited
dimensionality of URI-based linking ... the truth is that thin is skinny and
server-side is fat and in combination, they are still slow and not that
entertaining ... you may want to start looking at browser-less applications,
rich in controls, rich in content ... they can now play in a very
entertaining, fast moving, ever changing and even financially rewarding
world ... I don't want to build apps as good as Excel."

Noah Mendelsohn writes:

"Absolutely you get richer services and better performance on any given
platform by using the native services of a well designed OS... Nobody in
their right mind would implement the UI for an application like Excel purely
in HTML ... [The] Web/HTML/XML stuff is so valuable...because of the shared,
global information space that is the Web."

I've been arguing these points for years. It reminds me of two threads we
had going three years ago:

1. A thread started by Len in Dec '04 -- subject="The XML Backlash"
(http://lists.xml.org/archives/xml-dev/200412/msg00157.html and
http://lists.xml.org/archives/xml-dev/200412/msg00241.html) -- was prompted
by an eWeek article titled "Users Adjust to XML Tax on Networks"
(http://www.eweek.com/article2/0,1759,1732909,00.asp), which focused on XML
inefficiencies: "Specifically, the extra processing power required to handle
parsing and processing XML can be a strain on systems. In fact, according to
a report issued this month by ZapThink LLC, XML is starting to choke the
network from a bandwidth and processor perspective."

2. Another thread that same month, subject="Data Streams"
(http://lists.xml.org/archives/xml-dev/200412/msg00218.html) discussed the
value of tagging each data element in a long stream, rather than using a
highly efficient delimited file, such as CSV.
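
The overhead at issue in that "Data Streams" thread is easy to see. As a rough, hypothetical illustration (the record fields and tag names here are invented, not from the thread), the same few numeric records can be serialized as fully tagged XML and as delimited CSV and their sizes compared:

```python
# Toy comparison: the same 3 records as fully tagged XML vs. delimited CSV,
# showing the per-element markup overhead when fields are short.
records = [(1, 98.6, "A"), (2, 99.1, "B"), (3, 97.8, "A")]

xml_rows = "".join(
    f"<reading><id>{i}</id><temp>{t}</temp><ward>{w}</ward></reading>"
    for i, t, w in records
)
xml_doc = f"<readings>{xml_rows}</readings>"

csv_doc = "id,temp,ward\n" + "\n".join(f"{i},{t},{w}" for i, t, w in records)

# For short numeric fields, the tags dwarf the data itself.
print(len(xml_doc), len(csv_doc))
```

For a long stream of short numeric fields, the tags can account for the large majority of the bytes transmitted and parsed.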

The argument put forth by XML advocates boiled down to this: deal with XML
performance problems by (a) buying more/faster hardware, (b) increasing
bandwidth, (c) continuing to work on software parsing optimization, and (d)
using compression on XML files. These strategies, they claimed, would address
the performance problems while preserving the attribute definitions and
hierarchical associations that a delimited text file would lose.
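
Strategy (d) does reclaim much of the space, since XML's repetitive tags compress well. A minimal sketch (the sample document is invented; the ratio is whatever gzip happens to achieve on it, not a benchmark from the thread):

```python
# Strategy (d) in miniature: gzip a highly tag-repetitive XML document and
# compare sizes. Compression helps bandwidth, but the CPU cost of parsing
# the decompressed XML remains.
import gzip

xml_doc = "<readings>" + "".join(
    f"<reading><id>{i}</id><temp>98.6</temp></reading>" for i in range(100)
) + "</readings>"

compressed = gzip.compress(xml_doc.encode())
print(len(xml_doc), len(compressed))
```

Note that compression addresses the bandwidth half of the complaint but not the processor half: the receiver still has to decompress and then parse the full markup.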

In response, I mentioned I was working on a novel use of a COM (MS Excel)
application to create CSV data files that maintain the attributes and
hierarchies of XML, which I claimed would solve the performance problem when
working with structured data, especially numeric data. I had completed the
bulk of a prototype that converts XML to CSV using Excel macros; it can also
convert the CSV back to XML if desired (although the CSV-to-XML conversion
is a lengthy process). The CSV contains the XML Schema-defined attributes
and maintains the hierarchies between elements. It also allows for easy
modification of tag names.
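
The general idea of such a conversion can be sketched as follows. This is a hypothetical Python illustration of the concept only (the actual prototype used Excel macros and COM, and its format is not public): each CSV row records an element's path in the hierarchy, its text, and its attributes, so the structure a plain delimited file would normally lose is preserved and the XML could be rebuilt.

```python
# Hypothetical sketch: flatten XML into CSV rows that keep each element's
# hierarchy (as a path column) and its attributes, so nothing structural
# is lost relative to the original markup.
import csv
import io
import xml.etree.ElementTree as ET

xml_src = '<patients><patient id="p1"><name>Ann</name><age>34</age></patient></patients>'

def xml_to_csv(xml_text):
    rows = []
    def walk(elem, path):
        here = f"{path}/{elem.tag}"
        text = elem.text.strip() if elem.text and elem.text.strip() else ""
        attrs = ";".join(f"{k}={v}" for k, v in elem.attrib.items())
        rows.append([here, text, attrs])
        for child in elem:
            walk(child, here)
    walk(ET.fromstring(xml_text), "")
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["path", "text", "attributes"])
    writer.writerows(rows)
    return out.getvalue()

csv_out = xml_to_csv(xml_src)
print(csv_out)
```

Because each row carries its full path, reversing the process (CSV back to XML) is possible, though, as noted above, it is the slower direction.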

But, two years ago, before we developed a shrink-wrapped product, my company
shifted its focus to other things, so the prototype has been gathering dust
since then.

This current thread motivated me to write once again and present more
details about our discontinuous innovation (disruptive technology), which
consists of an asynchronous, publisher-subscriber, node-to-node architecture
with a patented underlying technology that takes advantage of XML's power
while using desktop, rich-client COM applications and CSV data files (or
delimited data streams). It is the only software codec that uses an encoder
to organize data elements into configurations (i.e., data formations such as
jagged arrays) and a decoder to locate the content elements for data
processing (e.g., formatting) based solely on their positions within those
configurations, without using database queries or markup tags.
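
To make the positional idea concrete, here is a loose, hypothetical illustration only (the patented technology itself is not public, and none of these names or structures come from it): the encoder packs values into an agreed-upon configuration, here a jagged array, and the decoder retrieves a value purely by its position, with no tags and no query.

```python
# Conceptual sketch of a positional codec: both sides share a layout
# ("configuration") describing where each kind of value lives, so the
# decoder finds content by position alone -- no markup, no database query.
config_layout = {"vitals": 0, "labs": 1}  # section name -> row in the jagged array

def encode(vitals, labs):
    # Jagged array: rows of unequal length, one per section.
    return [list(vitals), list(labs)]

def decode(frame, section, index):
    return frame[config_layout[section]][index]

frame = encode((98.6, 72, 120), (5.4, 13.1))
print(decode(frame, "labs", 1))  # value located by position alone
```

The point of the sketch is only that once both ends share the configuration, the per-element tags (and their parsing cost) disappear from the wire format.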

If the data source is in XML, the XML file is consumed by a node whose
publisher functions parse the data and transform the data values and element
attributes for storage in a CSV file, which is transmitted to n subscriber
nodes as encrypted e-mail attachments (or FTP, data streams, etc.). The
subscriber nodes then consume the CSV data file (or streams) and render its
contents using formatting templates. This architecture offers a set of
benefits that includes, and extends beyond, performance gains.
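
The publisher/subscriber flow above can be sketched in miniature. This is a hedged Python illustration with invented names and a trivial transport (the actual product uses COM applications with encrypted e-mail, FTP, or stream transport): the publisher turns XML into a CSV payload, and each subscriber renders that payload through a formatting template.

```python
# Sketch of the node flow: publisher parses XML and emits a CSV payload;
# subscribers consume the payload and render it via a formatting template.
import xml.etree.ElementTree as ET

def publish(xml_text):
    root = ET.fromstring(xml_text)
    lines = ["name,value"]
    for child in root:
        lines.append(f"{child.tag},{child.text}")
    return "\n".join(lines)  # payload to transmit (e-mail, FTP, stream, ...)

def subscribe(csv_payload, template="{name}: {value}"):
    rows = [line.split(",") for line in csv_payload.splitlines()[1:]]
    return [template.format(name=n, value=v) for n, v in rows]

payload = publish("<obs><pulse>72</pulse><bp>120</bp></obs>")
for line in subscribe(payload):
    print(line)
```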

Here's a link to our new blog, which describes this technology:
http://cpsplit.typepad.com. While it focuses on applications for healthcare,
the technology is applicable to any industry. We are looking to form business
relationships with other software vendors, so please contact me if you're
interested.

Steve Beller, PhD
President/CEO - National Health Data Systems, Inc.
sbeller@nhds.com
