It is not new thinking. If your windshield wipers aren't working, should
your car refuse to start? If your brakes aren't working, should your car
refuse to start? Should it refuse to let you put it in gear? Who decides?
Logistics analysis takes up the notion that not all failures are mission
critical. A systems check on startup accounts for that and issues errors
graded by severity. The automated system responds according to the severity.
A media system is no different from any other real-time system except in the
consequences of each level of error. The issue, then, is where in the food
chain a designer can specify a consequence: which objects the validator can
detect, and which objects can initiate shutdown. A validator is not the right
place to make decisions. It is the right place to detect errors.
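A minimal sketch of that division of labor, with every name hypothetical: the
validator only detects and grades findings, and a separate policy layer is
where the designer specifies which severities refuse to start.

    from dataclasses import dataclass
    from enum import Enum

    class Severity(Enum):
        INFO = 0
        WARNING = 1
        CRITICAL = 2

    @dataclass
    class Finding:
        component: str
        message: str
        severity: Severity

    # The designer specifies consequences here, not inside the validator.
    MISSION_CRITICAL = {"brakes"}

    def validate(checks):
        """Detect failures and grade them; make no decisions."""
        findings = []
        for name, ok in checks.items():
            if not ok:
                sev = (Severity.CRITICAL if name in MISSION_CRITICAL
                       else Severity.WARNING)
                findings.append(Finding(name, f"{name} failed self-check", sev))
        return findings

    def may_start(findings):
        """Policy layer: refuse to start only on mission-critical failures."""
        return not any(f.severity is Severity.CRITICAL for f in findings)

    findings = validate({"wipers": False, "brakes": True, "radio": False})
    for f in findings:
        print(f.severity.name, "-", f.message)
    print("start?", may_start(findings))

Here the broken wipers and radio produce warnings but do not block startup;
only a brake failure would.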
Media are neither passive nor general. As browsers quit being browsers and
became control wrappers, that was sometimes overlooked. A theory of
hypermedia is seldom a practical theory of networked components.
len
From: Bryan Rasmussen [mailto:bry@itnisk.com]
There is an assumption one often encounters in implementations for media (as
a reference medium I will focus on hypermedia in the modern browser). This
assumption stands in opposition to a general assumption about validation of
data for media. The media implementation assumption could be put as follows:
The absence of an object does not cause the failure of the whole. This means
that, as a general rule, if I refer to some object that the browser cannot
find, the browser is not designed to fail; the browser assumes that the other
objects it can find are still useful to the user and do not present a faulty
instance to the user. (Sometimes, of course, the browser does fail, but such
failures at missing components seem always to be due to bugs in the browser,
not to a requirement of presence.)
As an example of this assumption: a reference to an image that the browser
cannot resolve. This is generally the same behavior in printing, etc.
I am in total agreement with this assumption.
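A minimal sketch of that behavior, with the element shapes and the fetch stub
purely hypothetical: the renderer substitutes a placeholder for the
unresolvable image and carries on with what it has.

    def render(elements, fetch):
        """Render what resolves; degrade on what does not."""
        out = []
        for el in elements:
            if el["type"] == "text":
                out.append(el["content"])
            elif el["type"] == "image":
                data = fetch(el["src"])  # may return None
                if data is not None:
                    out.append(f"[image: {el['src']}]")
                else:
                    out.append(f"[missing image: {el.get('alt', el['src'])}]")
        return "\n".join(out)

    def fetch(url):
        # Stand-in for a network fetch; pretend this one 404s.
        return None

    page = [
        {"type": "text", "content": "A paragraph that is still useful."},
        {"type": "image", "src": "http://example.com/chart.png", "alt": "chart"},
    ]
    print(render(page, fetch))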
The assumption for validation of data for media is often as follows:
strict requirements on structure prevent failures in your media
presentation. But of course the fact that someone has put in an element
referring to an image does not mean the image ends up in the page.
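A small illustration of the gap, using a made-up document: the markup parses
(and could equally pass a schema) while the reference it carries dangles.

    import xml.etree.ElementTree as ET
    from pathlib import Path

    # Well-formed, and could be schema-valid, yet the reference may dangle.
    doc = ET.fromstring('<page><img src="figures/chart.png"/></page>')

    for img in doc.iter("img"):
        src = Path(img.get("src"))
        status = ("resolves" if src.exists()
                  else "dangles (the document is still structurally valid)")
        print(src, status)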
In a way we can define the components of a media instance as being loosely
coupled. How, though, has it come to pass that this is so? Is there any
theory out there, or do people have theories? I suppose the pedestrian reason
is that media itself is dataless, and any media format must allow decoupling
of individual media elements, because we cannot know what their meaning is
without the data context. So if one had a true XML browser that was
semantically aware, it would be able to crash whenever a document lacking a
required image was requested.
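A rough sketch of the two policies, with the store and the "required"
semantics invented for the example: the lenient resolver degrades, while the
hypothetical semantically aware one fails the whole instance.

    class MissingRequiredMedia(Exception):
        pass

    STORE = {"logo.png": b"<bytes>"}     # what the browser can actually find
    REQUIRED = {"consent-form.png"}      # semantics: this one must be present

    def resolve(refs, strict=False):
        """Lenient mode substitutes a placeholder; strict mode fails the whole."""
        resolved = []
        for ref in refs:
            if ref not in STORE and strict and ref in REQUIRED:
                raise MissingRequiredMedia(ref)
            resolved.append((ref, STORE.get(ref, "[placeholder]")))
        return resolved

    print(resolve(["logo.png", "consent-form.png"]))            # degrades
    try:
        resolve(["logo.png", "consent-form.png"], strict=True)  # whole fails
    except MissingRequiredMedia as missing:
        print("instance rejected:", missing)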
I am of course aware of the oodles of theory on strict validation of
document structures and so forth, and of why failure when data standards are
not held to is good. I am, however, sometimes worried that this kind of
strictness is proper in only a very few instances.