It's exactly this kind of thinking that led XML Schema to the position
that validation failures are not fatal: they simply cause parts of the
document to be marked as invalid. It's a position which (in the
interests of avoiding excessive complexity) the schema-aware XSLT and
XQuery specs have not followed through on.
Michael Kay
http://www.saxonica.com/
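
As an illustration of that position (a minimal sketch, not taken from
either message; the toolkit choice and file names are assumptions): with
a validator such as Python's lxml, a failed validation yields a list of
errors pinpointing the invalid parts, while the parsed document remains
fully usable afterwards.

    # Sketch: validation failures are reported, not fatal.  The invalid
    # parts are identified and processing of the document continues.
    # File names are hypothetical.
    from lxml import etree

    schema = etree.XMLSchema(etree.parse("catalogue.xsd"))
    doc = etree.parse("catalogue.xml")

    if not schema.validate(doc):
        for error in schema.error_log:
            print("invalid at line %d: %s" % (error.line, error.message))

    # The parsed tree is still there to be processed, valid or not.
    print("top-level elements:", len(doc.getroot()))
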
> -----Original Message-----
> From: Bryan Rasmussen [mailto:bry@itnisk.com]
> Sent: 26 July 2005 15:08
> To: xml-dev@lists.xml.org
> Subject: [xml-dev] theories of media languages and error handling
>
>
> There is an assumption one often encounters in implementations for
> media (as a reference medium I will focus on hypermedia in the modern
> browser). This assumption is in opposition to a general assumption
> about validation of data for media. The media implementation
> assumption could be put as follows:
>
> The absence of an object does not cause the failure of the whole.
> This means that, as a general rule, if I refer to some object that
> the browser cannot find, the browser is not designed to fail: it
> assumes that the other objects it can find are still useful to the
> user and do not present a faulty instance to the user. (Sometimes, of
> course, the browser does fail, but such failures on missing
> components seem always to be due to bugs in the browser rather than
> to any required presence.)
>
> As an example of this assumption, take a reference to an image that
> the browser cannot resolve: the rest of the page is still presented.
> This is generally the same behavior in printing and so on.
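
A minimal sketch of the behavior being described (not part of the
quoted message; the element name, attribute, and file layout are
assumptions): a renderer in this style resolves every referenced object
independently and substitutes a placeholder for anything it cannot
find, rather than abandoning the whole page.

    # Sketch: the absence of one object does not fail the whole page.
    # Each <img src="..."> is resolved on its own; a missing file gets
    # a placeholder and rendering simply continues.
    import os
    from lxml import etree

    def render(path):
        doc = etree.parse(path)
        out = []
        for img in doc.iter("img"):
            src = img.get("src", "")
            if src and os.path.exists(src):
                out.append("[image: %s]" % src)
            else:
                # Degrade gracefully: note the gap, keep rendering.
                out.append("[missing image: %s]" % src)
        return "\n".join(out)

    print(render("page.xml"))
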
>
>
> I am in total agreement with this assumption.
>
> The assumption for validation of data for media is often as follows:
>
> strict requirements on structure prevent failures in your media
> presentation. But of course the fact that someone has put in an
> element referring to an image does not mean the image actually ends
> up in the page.
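
To make the contrast concrete (again an illustration, not something
from the quoted message; the schema and file names are hypothetical):
the document below can be perfectly valid against a schema that
requires an img element with a src attribute, while the file that src
points to does not exist.

    # Sketch: structural validity says nothing about resource presence.
    # The document satisfies the schema, yet the referenced image may
    # be missing from disk.
    import os
    from lxml import etree

    doc = etree.fromstring('<page><img src="figure1.png"/></page>')
    schema = etree.XMLSchema(etree.parse("page.xsd"))

    print("structurally valid:", schema.validate(doc))
    print("image present on disk:", os.path.exists("figure1.png"))
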
>
>
>
> In a way we can define the components of a media instance as being
> loosely coupled. How, though, has it come to pass that this is so?
> Is there any theory out there, or do people have theories of their
> own? I suppose the pedestrian reason is that media itself is
> dataless, and any media format must allow decoupling of individual
> media elements because we cannot know what their meaning is without
> the data context; so that if one had a true XML browser that was
> semantically aware, it would be able to crash whenever a document
> missing a required image was requested.
>
> I am of course aware of the oodles of theory on strict validation of
> document structures and so forth, and of why failing when data
> standards are not adhered to is good. I am, however, sometimes
> worried that this kind of strictness is proper in only a very few
> instances.
>
> --
> Bryan Rasmussen