- From: "W. E. Perry" <wperry@fiduciary.com>
- To: xml-dev@ic.ac.uk
- Date: Thu, 06 May 1999 01:21:33 -0400
Jonathan Borden wrote:
> Furthermore I am suggesting that by enabling use of web native protocols
> under currently available distributed object systems we can move toward a
> better integration of distributed objects and the web.
This is an admirable goal, and one which I believed in and pursued for some time. However,
'currently available distributed object systems' were designed for closed enterprise networks
and have proved (are proving?) to be utterly unsuitable for the web. The web remains
overwhelmingly text-based (HTML mostly, of course). Objects are fundamentally inimical to the
text-based assumptions underlying most of what is done on the web. The history of non-text
additions, from .jpeg and .gif to RealAudio, Shockwave, and Flash, has run in essentially the
opposite direction from interoperable objects or distributed components. Simply put, the
object-and-interface paradigm has not been well received in an open market. I think that this
was Dave Brownell's point, from a somewhat different perspective, earlier in this thread:
> Consider that no RPC system in the world (CORBA, ONC, DCE, etc) has had
> the reach of some rather basic non-RPC systems like E-Mail (SMTP, POP,
> IMAP) or the web (HTTP, HTML, XML, etc). For folk who have spent a lot
> of time working on architectural issues, this is telling: it says that
> there's quite likely a problem with the RPC approach.
>
It may be a reasonable argument that there are some (few) cases where a truly authoritative
object should be invoked via an RPC mechanism--perhaps to use the inventor's own
implementation of a process or an algorithm. In most real-world cases I am familiar with,
however, the marshalling and de-marshalling are simply too expensive and potentially
error-prone to be competitive alternatives to performing locally the processing for which the
local node is responsible. In the best examples we have of truly distributed systems in
production today, each local node may have no idea of how, by whom, for what, or in what
context the output of its local processing is used by any other node (or for that matter what
processes its input comes from--an essential consideration if its own processes change in a
way which affects how input must be marshalled to them). Efficient designs concentrate on the
performance which can be achieved in those local processes and will not burden the local node
with knowing too much about other nodes in the system, especially since that knowledge can go
out of date quickly and in ways which may not be easily discoverable.
The promise of XML for solving these problems stems in part from its differences from
objects: primarily, where objects are opaque and require prior knowledge of their interfaces,
documents display openly not only the data they might convey but also the structure described
by their markup. I would agree with Tito Ingargiola:
> You're better off in nearly all cases simply firing a stream of XML at whoever needs it. I think that
> one could happily integrate all sorts of wonderful XML-derived benefits into a CORBA environment
> by having both stream and remote invocation interfaces
>
though my own preference (and the best solution in the overwhelming majority of the situations
I see) is for the stream of XML over the remote invocation interface.
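The "fire a stream of XML at whoever needs it" approach can be sketched in a few lines. This is a minimal illustration, not anything from the thread: the document, tag names, and `consume` function are all hypothetical. The point is that the consumer discovers structure from the markup itself, with no pre-agreed object interface or marshalling stubs.

```python
import io
import xml.etree.ElementTree as ET

# Hypothetical XML stream -- imagine it arriving over HTTP or a socket.
stream = io.StringIO(
    "<trades>"
    "<trade id='1'><symbol>IBM</symbol><qty>100</qty></trade>"
    "<trade id='2'><symbol>SUNW</symbol><qty>250</qty></trade>"
    "</trades>"
)

def consume(xml_stream):
    """Process each <trade> element as it completes in the stream.
    The structure is read from the markup, not from a remote
    interface the consumer must know in advance."""
    results = []
    for event, elem in ET.iterparse(xml_stream, events=("end",)):
        if elem.tag == "trade":
            results.append((elem.findtext("symbol"), int(elem.findtext("qty"))))
            elem.clear()  # discard processed elements to keep memory flat
    return results

print(consume(stream))  # -> [('IBM', 100), ('SUNW', 250)]
```

The receiving node here needs no knowledge of how or by whom the document was produced; it inspects the markup and performs only the local processing it is responsible for.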
Walter Perry
xml-dev: A list for W3C XML Developers. To post, mailto:xml-dev@ic.ac.uk
Archived as: http://www.lists.ic.ac.uk/hypermail/xml-dev/ and on CD-ROM/ISBN 981-02-3594-1
To (un)subscribe, mailto:majordomo@ic.ac.uk the following message;
(un)subscribe xml-dev
To subscribe to the digests, mailto:majordomo@ic.ac.uk the following message;
subscribe xml-dev-digest
List coordinator, Henry Rzepa (mailto:rzepa@ic.ac.uk)