  • To: "Michael Champion" <mc@xegesis.org>,"XML DEV" <xml-dev@lists.xml.org>
  • Subject: RE: [xml-dev] RSS beyond the Blog: 1992 or 1999? - was Re: [xml-dev] hurry GenX...
  • From: "Dare Obasanjo" <dareo@microsoft.com>
  • Date: Thu, 18 Mar 2004 12:57:04 -0800
  • Thread-index: AcQNHj9ogqiiJxeaQEWiY/K7YekFmwADFxDx
  • Thread-topic: [xml-dev] RSS beyond the Blog: 1992 or 1999? - was Re: [xml-dev] hurry GenX...

I can already get my bank account and credit card information over the Web using HTTP+SSL. So what is so crazy about my bank deciding to give me the results as an XML document instead of an HTML one? All this talk of 80/20 solutions and bubbles confuses me. 
The only people I've seen claim that syndication isn't capable of doing this are people slinging FUD.
Of course, there are a lot of serious questions to answer which have nothing to do with how you escape HTML or provide relative links, even though the various Atom boosters keep claiming that these are the *important* questions to answer:
1.) Does your bank/credit card company want to deal with the increased traffic from providing RSS feeds [even if they use compressed feeds and conditional GET, it is still a lot]?
2.) Will enough customers want to use this feature to make it worth the expense? 
3.) Will aggregator authors know how to securely protect user passwords so they aren't stolen by malicious applications? Digging in the config files of an aggregator becomes more tempting if you think you can find credit card numbers and bank account passwords in there. 
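The traffic concern in point 1 is softened, though not removed, by conditional GET: the aggregator remembers the ETag and Last-Modified values from its previous fetch and sends them back, so an unchanged feed costs only a bodyless 304 response. A minimal sketch (the header-building helper is illustrative, not from any particular aggregator):

```python
# Sketch of the request headers an aggregator would send for a
# conditional GET of a feed, per HTTP/1.1 validators. If the feed is
# unchanged the server answers 304 Not Modified with no body.

def conditional_headers(etag=None, last_modified=None):
    """Build request headers for a conditional, compressed feed fetch."""
    headers = {"Accept-Encoding": "gzip"}  # ask for a compressed feed
    if etag:
        headers["If-None-Match"] = etag            # validator from last ETag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    return headers

print(conditional_headers(etag='"abc123"',
                          last_modified="Thu, 18 Mar 2004 12:00:00 GMT"))
```

Even so, every poll still costs a round trip per subscriber, which is the point being made above.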
Blessed are the meek for they shall inherit the Earth, minus 40% inheritance tax. 


From: Michael Champion [mailto:mc@xegesis.org]
Sent: Thu 3/18/2004 11:19 AM
Subject: [xml-dev] RSS beyond the Blog: 1992 or 1999? - was Re: [xml-dev] hurry GenX...

> http://www.linuxworld.com.au/index.php/id;1044711497;fp;2;fpid;1

Sorry if this is a bit off topic (now that we've actually been
discussing xml development for a change...)

'"I would like to have an RSS feed to my bank account, my credit card,
and my stock portfolio," he said.
  "Anybody who has gone very far into this is starting to get very
excited," he said of the RSS phenomenon. "It's starting to feel like
1992 or 1993, when this Web thing was starting to stick its head out."'

Hmm, RSS feeds for credit card statements sounds more like a 1999
(internet bubble) idea than a 1992 (Web hitting critical mass) idea to
me. Let's think about why RSS (all flavors, including Atom, and
including the infrastructure of HTTP and aggregators, not just the text
format) works so well to track news feeds and weblogs, then see what
that implies for bank accounts and credit cards:

- It hits the 80/20 point dead on.  It's so simple you don't even need
agreement on much of anything except an approximate template for what a
news item looks like, and the news industry has evolved a pretty good
one over the years.  Details such as what to name tags and which date
formats to support can be argued over forever, but all permutations
can be supported in code with relatively little trouble.
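Supporting the date-format permutations really is little code. A hedged sketch that tries the two families actually used by feeds, RFC 822 dates (RSS 2.0's pubDate) and ISO 8601 / W3C-DTF (RSS 1.0 and Atom), in turn; the format list is illustrative rather than exhaustive:

```python
from datetime import datetime

# Common feed date formats: RFC 822 (RSS 2.0) and ISO 8601 (Atom, RSS 1.0).
FORMATS = [
    "%a, %d %b %Y %H:%M:%S %z",   # RFC 822, e.g. "Thu, 18 Mar 2004 12:57:04 -0800"
    "%Y-%m-%dT%H:%M:%S%z",        # ISO 8601 with numeric offset
    "%Y-%m-%dT%H:%M:%SZ",         # ISO 8601 with literal "Z" (UTC)
]

def parse_feed_date(text):
    """Try each known feed date format in turn; raise if none matches."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError("unrecognized date: %r" % text)

print(parse_feed_date("Thu, 18 Mar 2004 12:57:04 -0800"))
```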

- It scales by exploiting the Web architecture and the power law
phenomenon.  Popular feeds are cached in servers, intermediaries,
mirrors, etc. using exactly the same technologies used to support
popular websites.

- It ultimately contains human-readable text; all ambiguities about
what, where, when, why, how in the content or metadata are resolved by
the human reader.

- Father Darwin performs his brutal magic -- hacks that work evolve
into best practice, and ultimately (IETF willing) get standardized; 
those that don't work die quietly.

Yep, that sounds a LOT like 1992 to me.  It has created an
infrastructure of polling aggregators that has created a critical mass,
so lots of things that used to be done with mailing lists can now be
done quite nicely as RSS feeds to get information to interested parties
quickly and efficiently.  The problem I have is in trying to make this
work as a universal pub-sub system: the things that make RSS et al a
winner for casual, public text make it a loser for critical, personal
data:
- 80% of the capability/reliability doesn't cut it. If I miss a news
flash about the latest horrors out of Iraq or Washington, or an update
to my favorite weblog, c'est la vie.  If someone is draining my bank
account I want to *know* that reliably and immediately by a credible
and authoritative mechanism, not an 80/20 solution.  (Given Len's line
of work, this is probably why he's so vehement about the limits of Mr.
Pareto's principle.)
- It's not going to scale if I'm the only consumer of a "feed."  A bank
isn't going to appreciate having all their customers ping them every
few minutes to see if anything changed, and Bloglines won't help unless
there are lots of people subscribing to a given feed.
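Back-of-the-envelope arithmetic shows why per-customer polling worries a bank; the customer count and poll interval below are illustrative assumptions, not figures from the message:

```python
# Assumed numbers: a bank with 1,000,000 online customers whose
# aggregators each poll a private account feed every 5 minutes.
customers = 1_000_000
poll_interval_s = 5 * 60  # seconds between polls per customer

# Steady-state request rate the bank must absorb, almost all of it
# for feeds that have not changed since the last poll.
requests_per_second = customers / poll_interval_s
print(requests_per_second)  # roughly 3,333 requests per second
```

Because each feed has exactly one subscriber, none of this load can be shed onto shared caches or intermediaries the way a popular public feed's can.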

- It contains data, and the machine processing it will need to have
"intelligence" that is beyond the state of the art to resolve
ambiguities, or it had better match an authoritative standard that
stupid machines can easily map onto existing application semantics.
Anyone who thinks that there are likely to be authoritative standards
here doesn't follow the RSS 0.91 / 1.0 / 2.0 / Atom discussions, or the
somewhat more genteel standards warfare going on in the Web Services
world.
- Father Darwin (well his alter ego the Invisible Hand) will indeed
whack the un-successful, but not before a new generation of Enronesque
scammers and pill-purveying spammers do their worst.  The consequences
of failure when real money and confidential information are at stake
are vastly worse than missing an update to a favorite weblog or the
latest horror stories from Iraq or Washington.

I'm all for leveraging XML and the Web out of the box for things that
it is well-suited for (and the current RSS ecology is one of them).  I
get nervous when bubbles start inflating.  In 20:20 hindsight, it was
obvious that the Web is a far better place to sell books and run
auctions than it is a place to sell pet food or groceries. I hope we can
apply some 20:20 foresight here and figure out an *appropriate*
combination of XML+HTTP, whichever Web services specs actually do
something useful, and the real world enterprise infrastructure that can
be applied to the opportunities that Tim mentions.

The xml-dev list is sponsored by XML.org <http://www.xml.org>, an
initiative of OASIS <http://www.oasis-open.org>

The list archives are at http://lists.xml.org/archives/xml-dev/

To subscribe or unsubscribe from this list use the subscription
manager: <http://www.oasis-open.org/mlmanage/index.php>

