From: "Mike Champion" <mc@xegesis.org>
> Here's just a couple tidbits from today's surfing
> http://www.bea.com/events/webservices/Bosworth_WSRC_Jan03.pdf -- an Adam
> Bosworth "vision thing" piece. Note p. 43 about XML databases (and
> obviously the middleware and applications they support) serving up XML
> messages/fragments/state thousands of times per second. How sure are we
> that bandwidth/parsing performance is not a bottleneck in such a scenario?
Database servers manipulate data in some binary structure. The ideal for performance
is for a database server to offload as much rearranging as possible to the closely-coupled
middleware (i.e., in a typical three-tier architecture). For example, Oracle's JDBC
drivers present a standard interface on the middleware, but talk some specific
and presumably efficient protocol to the database server.
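A minimal sketch of what that looks like from the middleware side (the connection URL,
credentials, and table are made up for illustration): the application codes only to the
standard JDBC interface, while the driver underneath handles the vendor-specific
conversation with the database server.

    import java.sql.*;

    public class MiddleTier {
        public static void main(String[] args) throws SQLException {
            // Standard JDBC interface on the middleware; the thin driver
            // speaks Oracle's own wire protocol to the back-end server.
            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:orcl", "app", "secret");
            PreparedStatement ps = con.prepareStatement(
                "SELECT doc FROM orders WHERE id = ?");
            ps.setInt(1, 42);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString("doc"));
            }
            con.close();
        }
    }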
The traditional architectural answer, when you want to uncouple the front-end
from the back-end, is to introduce a middle layer (or a front-end processor for the server).
Maybe the same is true even of Web Services*: the Web Service runs on
some middleware talking XML to the outside world, and that middleware should
talk some optimized proprietary protocol to its (tightly-coupled) server, with the interface
to the database at the server being document objects (or whatever XML-ish thing
is needed).
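Roughly this shape, in Java (every name here is hypothetical, just to show the split):
the outward-facing method speaks XML, and the private call stands in for whatever
optimized protocol the middleware uses to reach its tightly-coupled server.

    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;

    public class OrderService {
        // Outside world: XML in, XML out.
        public String getOrder(String requestXml) throws Exception {
            Document req = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(requestXml)));
            String id = req.getDocumentElement().getAttribute("id");
            return fetchFromBackend(id);
        }

        // Stand-in for the tightly-coupled server->middleware protocol,
        // which hands back document objects (or whatever XML-ish thing
        // the service needs to serialize).
        private String fetchFromBackend(String id) {
            return "<order id=\"" + id + "\"><status>shipped</status></order>";
        }

        public static void main(String[] args) throws Exception {
            System.out.println(new OrderService().getOrder("<getOrder id=\"42\"/>"));
        }
    }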
This is why I am a little puzzled by public standard binary formats: the strategy
needs to be to offload processing to a FEP or middleware server, and that needs
to be tightly coupled to the binary particulars of the database system. Which is
not to say that binary format X might not be just what the doctor ordered for a
tightly-coupled server->middleware protocol from vendor Y.
Which makes me wonder whether one value of binary formats is to provide a greater
range of solutions to developers of tightly-coupled, probably-proprietary protocols?
Cheers
Rick Jelliffe
* and individual filters in a distributed pipeline