Then the interesting development would be to use
the RDF/ontology systems to inform the tagging
systems by inspection. The problem with the tree
model is that even if there is a wildcard, that
just means 'anything goes': the user either picks
one of the safe options (a contained element or
attribute) or makes up something for the wildcard
slot. An ontological system should be able to
'know' that the topic is munitions or flight
controls and have a consistent, if finite, set of
assertions for that topic even if the human doing
the tagging doesn't.
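A rough sketch of what that could look like (the
element and ontology names here are invented for
illustration): the schema leaves a wildcard slot
open, and the ontology supplies the finite set of
assertions a tagging tool may offer for the topic
at hand.

  <!-- content model with a wildcard slot: 'anything goes' -->
  <xs:element name="procedure">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="para" type="xs:string"/>
        <xs:any namespace="##other" processContents="lax"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

  # hypothetical ontology assertions (Turtle) for one topic
  @prefix ex: <http://example.org/safety#> .
  ex:FlightControls ex:permitsElement ex:aileronWarning .
  ex:FlightControls ex:permitsElement ex:servoCaution .

A tool that inspects those assertions could offer
only aileronWarning or servoCaution when the topic
is flight controls, instead of leaving the slot
wide open to whatever the author makes up.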
That doesn't solve the completeness problem, but
nothing does. Some element of danger remains. Now
the problem is temporal awareness, or context of
application: is it ever possible that a person is
under the aileron or in front of the engine, and
can one design a repair depot where that doesn't
happen? Again, it isn't the machine that is
dangerous; it is the environment. Most tagging
dilemmas come down to engineering the environment,
that is, meta-controlling it (which is also a
self-limiting solution, but ok).
That is why street diggers put out traffic cones.
They don't keep someone from driving into the hole,
but they do keep the driver from winning a lawsuit
after he digs himself out.
len
From: Ari Nordstrom [mailto:mayfair@tiscali.se]
The reason why the (mis-)tagging is a PARA and not a whole new tag,
invented by an adventurous author, is simply that the system where the
mistake was made requires validation. If validation weren't required, I'm
pretty sure there would be a new tag instead. If you know people do this
kind of thing, you want to rule out as many mistakes as possible.
It's a very good reason for validation, and motivation enough for a number
of "mission-critical" systems, from airplane documentation to armed forces
field instructions.
See, PARA is bad enough, but it won't lose the information. A new tag just
might, in some context.
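To make that concrete (a made-up fragment, not taken from the system in
question): under a DTD like the one below, an invented tag is rejected
by the parser, so the author falls back on PARA and the text survives;
without validation, the invented tag slips through and downstream
processing can drop it.

  <!ELEMENT step (para | warning)*>

  <step>
    <caution>Stand clear of the aileron.</caution> <!-- invented: invalid -->
    <para>Stand clear of the aileron.</para>       <!-- mis-tagged, but valid -->
  </step>

A stylesheet keyed to para and warning renders the mis-tagged PARA as
ordinary text, wrongly styled but present; whatever sits in an unknown
caution element can silently vanish.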
>Even more fundamentally, the real problem here is the necessity of the
>warning in the first place. Most properly designed systems (munitions may
>be an exception) should not be able to kill people. There should be
>nothing in my toaster, computer, or microwave oven that can injure me
>short of dropping it on my head from a high building. This should be true
>regardless of what the manual says.
The _system_ doesn't kill anyone, but the things the system is used to
describe just might do that. Both of my examples above deal with
information of that nature.