On Monday 11 February 2002 21:33, you wrote:
> So the *real* question isn't whether we should use REST principles, or
> HTTP, or BEEP, or <whatever>. It's "what is the problem being solved"
> first and foremost. In other words, while REST is applicable to
> distributed hypermedia (but even then, there are issues with URIs as
> commonly used), it is an open question whether it is well-suited for
> business transactions etc. I personally think there are probably a
> *large* number of problems that can be solved well using REST, but
> there are a set of problems that aren't really well suited to it
> (except in the most abstract sense).
There is a lot of importance and mindshare attached to atomic transactions
across a number of client->server or peer->peer operations, for example...
I've heard a lot of people say that RPC (which I consider a superset of
RPC-with-session-state-including-transactions in this case) is bad because
it's not scalable; HTTP or something like it embodying REST should do for
everything... I'm certainly of the opinion that REST is handy for idempotent
thing-fetching, like DNS and browsing information, but it'd be nice to be
able to suddenly open up a session with a resource and use server-side state
when it's sensible to do so!
That's my main bugbear with "HTTP for everything"; I'd like a protocol that
handles REST as one particular case.
I'm working on a model for this under the name of Mercury (I have a periodic
table fetish); the idea is quite simple at heart. A network-accessible object
exposes a set of Interfaces, named with something unique like a URI; each
Interface is a set of Methods (Are we object-oriented yet, kids?) with simple
string names unique within that Interface (I won't say namespace out loud,
but I'm thinking it).
Each Method has a type - unreliable send, guaranteed send, send/receive,
call, open session.
All take arguments. How you define those arguments - a MIME-encoded request
body, a bit of XML, a serialised list of Java objects - is irrelevant to the
architecture.
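To make that concrete, here's a rough sketch of the model in Java - every name here (MethodType, MercuryInterface and so on) is just my illustration, nothing is nailed down:

import java.util.HashMap;
import java.util.Map;

// The five method types described above.
enum MethodType { UNRELIABLE_SEND, GUARANTEED_SEND, SEND_RECEIVE, CALL, OPEN_SESSION }

// A Method: a simple string name, unique within its Interface, plus a type.
// Arguments and results are opaque byte[] payloads, since the encoding
// (MIME body, XML, serialised Java objects...) is outside the architecture.
final class Method {
    final String name;
    final MethodType type;
    Method(String name, MethodType type) { this.name = name; this.type = type; }
}

// An Interface: named with something unique like a URI, holding a set of
// Methods keyed by name.
final class MercuryInterface {
    final String uri;
    final Map<String, Method> methods = new HashMap<>();
    MercuryInterface(String uri) { this.uri = uri; }
    void add(Method m) { methods.put(m.name, m); }
}

So in this sketch a network-accessible object is nothing more than a set of those MercuryInterface values.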
Send/receive and call return something - again, how that something is defined
is another issue.
Send/receive is like GET: cacheable and idempotent. Call is like POST.
Unreliable send is a lightweight *scalable* way of asynchronously sending a
message without wasting resources on guaranteeing receipt - a
single UDP packet will do nicely!
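As a sketch, an unreliable send really can be that thin - one datagram on a socket, no ACK, no retransmission (the class name is a placeholder of mine, and the wire format is left undefined):

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

final class UnreliableSend {
    // One datagram, fire and forget - receipt is not guaranteed, and no
    // resources are spent pretending otherwise.
    static void send(InetAddress host, int port, byte[] payload) throws IOException {
        try (DatagramSocket sock = new DatagramSocket()) {
            sock.send(new DatagramPacket(payload, payload.length, host, port));
        }
    }
}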
Guaranteed send will, depending on the API, either block until an ACK is
received (or return with an error code if it times out), or do it
asynchronously and, if the send timed out, put the request back into a queue
for return to the application as 'undelivered'.
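Here's a rough sketch of both flavours of guaranteed send, with the actual transport and ACK plumbing stubbed out - only the control flow is the point, and all the names are mine:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

final class GuaranteedSend {
    // Timed-out messages are returned to the application through this queue.
    final BlockingQueue<byte[]> undelivered = new LinkedBlockingQueue<>();

    // Blocking flavour: true if ACKed in time, false on timeout.
    boolean sendBlocking(byte[] payload, long timeoutMillis) throws InterruptedException {
        CountDownLatch acked = new CountDownLatch(1);
        transmit(payload, acked);                        // stub: put it on the wire
        return acked.await(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    // Asynchronous flavour: on timeout, requeue the request as 'undelivered'.
    void sendAsync(byte[] payload, long timeoutMillis) {
        new Thread(() -> {
            try {
                if (!sendBlocking(payload, timeoutMillis)) undelivered.put(payload);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();
    }

    void transmit(byte[] payload, CountDownLatch acked) {
        // Stub: a real stack would send the message here and count down the
        // latch when the peer's ACK arrives.
    }
}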
I'm getting mired in the details before getting to the interesting bit,
sorry - the open session type of method returns a result body, as before, but
also returns a *session handle*. Ideally, the protocol stack should handle
sending keepalives every minute or so from both ends of the connection so the
two peers can know if the other end dies (important information, that).
Anyway, the server can, in the handler for the open session method, refuse
instead of accepting. The result body can be used to indicate why.
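Sketched out, the result of an open session method might look something like this (the class name and fields are illustrative only):

// A session-open handler either accepts - returning a result body plus a
// session handle - or refuses, using the result body to say why.
final class SessionResult {
    final boolean accepted;
    final byte[] resultBody;   // on refusal, explains why
    final long sessionHandle;  // only meaningful if accepted
    private SessionResult(boolean ok, byte[] body, long handle) {
        accepted = ok; resultBody = body; sessionHandle = handle;
    }
    static SessionResult accept(byte[] body, long handle) {
        return new SessionResult(true, body, handle);
    }
    static SessionResult refuse(byte[] reason) {
        return new SessionResult(false, reason, 0);
    }
}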
An open session (from either end) looks just like a remote object; it has a
list of Interfaces... the only difference is that the handler code at each
end, if any (many clients will export a null interface set when they open
sessions, unless the server needs a way of asynchronously requesting
information from the client, notifying it of events, etc), gets passed a
session handle whenever a new request comes in, which it then uses to
reference the session state. Or the implementation of the session open method
can create a new object to hold the session state and hand it to the protocol
stack, which passes it back to the stateful interface handlers.
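A rough sketch of that stateful handler side, with the handle-to-state lookup made explicit (again, every name here is made up for illustration):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface SessionHandler {
    // The protocol stack passes the session handle in with every request.
    byte[] handle(long sessionHandle, String method, byte[] args);
}

final class StatefulService implements SessionHandler {
    // Session state created by the session-open handler, keyed by handle.
    private final Map<Long, Object> sessions = new ConcurrentHashMap<>();

    long openSession(Object initialState) {
        long handle = System.nanoTime();   // placeholder handle allocation
        sessions.put(handle, initialState);
        return handle;
    }

    public byte[] handle(long sessionHandle, String method, byte[] args) {
        Object state = sessions.get(sessionHandle);
        // ... dispatch on 'method' using the per-session state ...
        return new byte[0];
    }
}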
Sorry, getting stuck in the details again. To summarise, I think the current
approach to designing protocols is... wrong: we have TCP and UDP and RDP and
so on, each with its own session establishment protocol and port numbering
scheme, then SMTP and HTTP and NNTP and so on, each with its own *addressing*
scheme and data model.
I'd much rather have a unified addressing scheme and information model
(combined together, are these not an object model? Sorry, another thread...),
then put the details normally associated with 'protocols' on top of that...
ABS
--
Alaric B. Snell
http://www.alaric-snell.com/ http://RFC.net/ http://www.warhead.org.uk/
Any sufficiently advanced technology can be emulated in software