> From: Joshua Allen [mailto:email@example.com]
> Sent: Wednesday, August 20, 2003 7:11 PM
> To: Julian Reschke; Simon St.Laurent; firstname.lastname@example.org
> Subject: RE: [xml-dev] Postel's Law Has No Exceptions
> > Speaking from server implementation experience: just because one
> > server chose to be "liberal" (in that case, non-compliant),
> > competing servers are now forced to implement that buggy behaviour as
> > well.
> That case (assuming you mean WebDAV servers) is not an example of
> "liberal in what you accept". If the servers in question are so
> liberal, why do they not accept perfectly well-formed input from other
> clients? If they are conservative in sending, why do they produce
> non-well-formed XML? Certain WebDAV servers seem like a perfect example
> of blatant disregard for Postel's law to me.
The issue is different. If server A (sold by a big company and widely
deployed) accepts broken requests, clients may start relying on that
behaviour. Other, smaller vendors thereby have the choice of either
implementing to the spec (rejecting the broken requests) or emulating the
broken server behaviour.
My point is that unless *everybody* accepts the same kind of broken
requests, interoperability will actually be *worse*. But if everybody
indeed *is* accepting the same broken requests, it would have made more
sense to define this as *correct* behaviour in the first place and have
draconian error checking.
> There is a difference between gracefully recovering from recoverable
> input errors and *requiring* input errors as a condition of functioning.
Yes, and as far as I can tell, "recovering gracefully" is actually harmful
unless the sender is told that the request was indeed broken.
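To sketch what "reject and signal" rather than "silently repair" could look like, here is a minimal, hypothetical request handler (the function name, the PROPFIND framing, and the (status, body) return convention are illustrative assumptions, not taken from any real WebDAV implementation):

```python
# Hypothetical handler sketch: parse the request body strictly and
# report a parse failure back to the sender instead of guessing.
import xml.etree.ElementTree as ET

def handle_propfind(request_body: bytes):
    """Parse a PROPFIND body strictly; signal errors back to the sender."""
    try:
        root = ET.fromstring(request_body)
    except ET.ParseError as e:
        # Silently "repairing" the body would hide the client's bug;
        # a 400 response tells the sender its request was broken.
        return 400, f"Bad Request: body is not well-formed XML ({e})"
    return 207, f"Multi-Status: parsed <{root.tag}>"
```

With this shape, a well-formed body such as `b"<propfind/>"` yields a 207, while an unclosed `b"<propfind>"` yields a 400, so the sender finds out immediately that it is producing broken XML.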
I think people often do not realize that the robustness principle doesn't
necessarily mean "accept as many broken requests as you can", but rather
"expect broken requests, and handle them in a sane way". Depending on the
protocol, the "sane way" may well be to reject the request; as RFC 1122
(Section 1.2.2) puts it:
Software should be written to deal with every conceivable
error, no matter how unlikely; sooner or later a packet will
come in with that particular combination of errors and
attributes, and unless the software is prepared, chaos can
ensue. In general, it is best to assume that the network is
filled with malevolent entities that will send in packets
designed to have the worst possible effect. This assumption
will lead to suitable protective design, although the most
serious problems in the Internet have been caused by
unenvisaged mechanisms triggered by low-probability events;
mere human malice would never have taken so devious a course!

Adaptability to change must be designed into all levels of
Internet host software. As a simple example, consider a
protocol specification that contains an enumeration of values
for a particular header field -- e.g., a type field, a port
number, or an error code; this enumeration must be assumed to
be incomplete. Thus, if a protocol specification defines four
possible error codes, the software must not break when a fifth
code shows up. An undefined code might be logged (see below),
but it must not cause a failure.
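The "enumeration must be assumed incomplete" rule above can be sketched in a few lines; the particular code table and the mapping function are illustrative assumptions, not from any specific protocol:

```python
# Sketch: tolerate error codes the spec didn't enumerate.
# If the spec defines four codes, a fifth must not crash the software.
import logging

KNOWN_ERROR_CODES = {1: "timeout", 2: "refused", 3: "unreachable", 4: "reset"}

def describe_error(code: int) -> str:
    """Map a protocol error code to a description without failing on new codes."""
    try:
        return KNOWN_ERROR_CODES[code]
    except KeyError:
        # An undefined code is logged, but must not cause a failure.
        logging.warning("unknown error code %d", code)
        return "unknown"
```

Note this is the opposite of being liberal about *broken* input: the message itself is still well-formed, and the unknown value is flagged (logged) rather than silently swallowed or treated as fatal.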
<green/>bytes GmbH -- http://www.greenbytes.de -- tel:+492512807760