   Re: [xml-dev] Note from the Troll


On Monday 28 October 2002 8:47 pm, J.Pietschmann wrote:

> > But that argument's not really valid. For a start an in-memory call might
> > be to a dynamic library that's not currently paged into memory - in fact,
> > the shared library file might have been deleted since it was linked in or
> > the stack overflowed! Ok, most current systems will kill the processes if
> > either of these events occur, but they could just throw a
> > GruesomeSystemException to give the code a chance to apologise to the
> > user. Indeed, the equivalent of a 404 in following a loosely linked
> > function pointer in a POSIX execution environment is a trappable SEGV
> > signal!

> I don't think it's that easy. The most feared problem in networking is
> the dreaded timeout. On the client side you simply don't have an idea
> whether the server got a GruesomeSystemException itself or whether
> someone pulled the wrong plug elsewhere. In either case you might
> get an answer later - too late (for CORBA et al.), or perhaps not (MOM).

Yep, but using HTTP doesn't solve that problem. The way most protocols solve 
it is to either take the easy option and run over TCP (which handles this 
stuff for you, but also wastes time providing other features that are a 
hindrance to RPC) or to use a UDP-based protocol with timeouts and 
retransmissions.
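To illustrate that second option, a bare-bones timeout-and-retransmit loop 
over UDP might look something like this in Java (the host, port and numbers 
are invented; it's a sketch of the idea, not a real protocol):

import java.net.*;

// A rough sketch: one request, up to five retransmissions, two-second
// timeout per attempt. Host and port are invented for the example.
class UdpRpcClient {
    static byte[] call(byte[] request) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        try {
            socket.setSoTimeout(2000);
            InetAddress server = InetAddress.getByName("rpc.example.org");
            DatagramPacket out = new DatagramPacket(request, request.length, server, 9999);
            byte[] buf = new byte[4096];
            DatagramPacket in = new DatagramPacket(buf, buf.length);
            for (int attempt = 0; attempt < 5; attempt++) {
                socket.send(out);                    // (re)transmit the request
                try {
                    socket.receive(in);              // block for up to 2s for a reply
                    byte[] reply = new byte[in.getLength()];
                    System.arraycopy(in.getData(), 0, reply, 0, in.getLength());
                    return reply;
                } catch (SocketTimeoutException e) {
                    // no reply yet; loop round and retransmit
                }
            }
            throw new Exception("no reply after 5 attempts");
        } finally {
            socket.close();
        }
    }
}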

But you can never escape the old problem of the two armies: if you send a 
single request packet and then retransmit for days without ever hearing 
anything back, you still don't know whether the remote server got the packet 
or not. It might have died just after performing whatever horrible 
irreversible action your client code has just assumed didn't happen and 
re-requests later. It's trivial, though, to prevent retransmits or packet 
duplication from causing duplicate activity in the server: just put a random 
number in every request and have the server screen out duplicates based on it.
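A rough sketch of that screening (the names are invented, and a real server 
would also need to expire old IDs eventually):

import java.util.*;

// The client puts a random ID in every request; the server replays its
// stored reply for an ID it has already seen instead of repeating the work.
class DedupingServer {
    private final Map<Long, String> repliesById = new HashMap<Long, String>();

    synchronized String handle(long requestId, String payload) {
        String previous = repliesById.get(requestId);
        if (previous != null) {
            return previous;                 // duplicate/retransmit: don't redo the action
        }
        String reply = doIrreversibleThing(payload);
        repliesById.put(requestId, reply);
        return reply;
    }

    private String doIrreversibleThing(String payload) {
        return "done: " + payload;           // stands in for the real, irreversible work
    }
}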

> Another problem is designing stuff for networking. In particular
> CORBA and RMI make it easy to do stupid things like defining
>    interface customer {
>      attribute string firstname;
>      attribute string lastname;
>      attribute string birthdate;
>      attribute string income;
>   ...
> and then fill form fields individually with getAttribute()-methods. This
> might even work for the developer, but deploy this to 400 clients, and
> the result cannot be distinguished from DDoSing the server.

Exactly; no programming technique can make network calls as cheap as local 
calls. Not even using HTTP! I agree, though, that making the RPCs hard to do 
(by requiring conversion of your data to XML and poking it into HTTP, which 
takes many lines of code) will encourage people to do it less often, thus 
removing the above problem.

This can be solved at the protocol level, though. I'm currently working on an 
advanced RPC protocol for Java that, amongst other things:

1) Allows clients to cache getter return values, as long as they're 
properly declared on the interface as cacheable, just as HTTP GETs can be 
cached by proxies (there's a rough sketch of both points after this list)

2) Using the same metadata, it will prefetch the return values of nominated 
getters; when any of them is called, unless the result is in the local cache, a 
single request for *all* of them is made and the results are put in the cache to 
hedge against further requests. This is great for getName(), getBirthday(), 
etc., but the interface developer will be encouraged not to enable it for 
getComplexDerivedValueOfUsingImmenseAlgorithm(), e.g. BigInt.getPrimeFactors(). 
The theory is based on the fact that the cost of a network call is 
usually dominated by fixed overheads rather than by message size.
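To make 1) and 2) concrete, a very rough sketch of what the metadata and the 
client-side stub might look like is below. The names (Cacheable, Customer, 
CachingStub) are invented for the example, and the batch fetch is only 
simulated by a loop over the real stub rather than packed into one message:

import java.lang.annotation.*;
import java.lang.reflect.*;
import java.util.*;

// Marks a getter as safe to cache and to fetch in one batch with the others.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Cacheable {}

interface Customer {
    @Cacheable String getFirstname();
    @Cacheable String getLastname();
    String getPrimeFactorsOfIncome();   // deliberately not cacheable: too expensive
}

// Client-side stub: the first call to any @Cacheable getter pulls *all* of
// them into the cache; later calls are answered locally without the network.
class CachingStub implements InvocationHandler {
    private final Customer remote;      // the real (remote) stub
    private final Map<String, Object> cache = new HashMap<String, Object>();

    CachingStub(Customer remote) { this.remote = remote; }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        if (m.isAnnotationPresent(Cacheable.class)) {
            if (cache.isEmpty()) {
                // One "batch" request, simulated here by looping over the stub;
                // a real protocol would pack these into a single network message.
                for (Method g : Customer.class.getMethods()) {
                    if (g.isAnnotationPresent(Cacheable.class)) {
                        cache.put(g.getName(), g.invoke(remote));
                    }
                }
            }
            return cache.get(m.getName());
        }
        return m.invoke(remote, args);  // non-cacheable calls go straight through
    }

    static Customer wrap(Customer remote) {
        return (Customer) Proxy.newProxyInstance(
                Customer.class.getClassLoader(),
                new Class<?>[] { Customer.class },
                new CachingStub(remote));
    }
}

The point is that once the interface declares which getters are cheap and 
safe to cache, the stub can fetch them all in one round trip instead of 400 
clients making five separate calls each.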

> Furthermore
> if the attributes have to be written back, everyone will ask for
> transactions. Which can be easily avoided. IMO the problem of lack of
> standards for transactions in certain networking protocols is the most
> overrated problem there.

Transactions get handier with increasing complexity of the data processing 
code, I find... if you have a 50-page algorithm intimately munging your 
important data, being able to undo it part way through when you run out of 
something or other (or if the network dies and things time out!) is pretty 
handy.
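For example, with a JDBC transaction (the table and column names are invented 
for the example), the partial work just rolls back:

import java.sql.*;

// If anything fails part way through (including a network timeout surfacing
// as an SQLException), the whole transfer rolls back.
class TransferExample {
    static void transfer(Connection con, int from, int to, int amount) throws SQLException {
        con.setAutoCommit(false);
        try {
            PreparedStatement debit = con.prepareStatement(
                "UPDATE accounts SET balance = balance - ? WHERE id = ?");
            debit.setInt(1, amount);
            debit.setInt(2, from);
            debit.executeUpdate();

            PreparedStatement credit = con.prepareStatement(
                "UPDATE accounts SET balance = balance + ? WHERE id = ?");
            credit.setInt(1, amount);
            credit.setInt(2, to);
            credit.executeUpdate();

            con.commit();                    // both updates, or neither
        } catch (SQLException e) {
            con.rollback();                  // undo the partial work
            throw e;
        }
    }
}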

But the vast majority of traffic doesn't fit that category, 'tis true.

>
> J.Pietschmann
>

ABS

-- 
Oh, pilot of the storm who leaves no trace, Like thoughts inside a dream
Heed the path that led me to that place, Yellow desert screen




 
