- From: "Bullard, Claude L (Len)" <clbullar@ingr.com>
- To: Joshua Allen <joshuaa@microsoft.com>, Sam Hunting <sam_hunting@yahoo.com>, xml-dev@lists.xml.org
- Date: Wed, 01 Nov 2000 10:15:44 -0600
<warning>Going long.</warning>
Right. Thank you, Joshua. I'll take a look at the URL.
Meanwhile, here are some aging thoughts on trust and
the maintenance of stable cooperating systems.
***************************************************
Encapsulation of view means that updates are
only occurring in the locale of interest. There are
several techniques for achieving this, well described
in the virtual reality literature. Proximate or
scoped addressing is part of that; there is a lot
of literature on clustering, applying imaginary
numbers, etc. One can go pretty far over the edge
with this one, but the essential idea is the view dimension
as described in complex systems theory, which enables
level of detail based on distance to the viewed object.
A wrapped string is a point until viewed closely,
a ball then, and closer than that, a wrapped cylinder,
and so forth. Map paths have the same characteristics,
eg, interstates down to local roads, to finite addresses,
so what we are describing is granularity of address
resolution, and why in HyTime they expended so much
effort on identifying address types independent of
system, and on the concept of location ladders, which
can chain these. All of these come under resolution.
People spent a lot of time looking at fractals but
really, fractals are a kind of illusion, a path
produced by feedback looping and thumping a control.
Fractal dimensions and view dimensions are the same,
but note that in process design, it is just a GUI.
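
That view dimension is easy to make concrete: level of detail keyed to
viewer distance. A minimal sketch in Python; the representation names
and threshold distances are purely illustrative:

    def representation(distance):
        """Pick a level of detail for the wrapped-string example by distance."""
        if distance > 100.0:
            return "point"             # far away, the string is just a point
        if distance > 10.0:
            return "ball"              # closer, it resolves into a ball
        if distance > 1.0:
            return "wrapped cylinder"  # closer still, the winding shows
        return "full geometry"         # near enough to see individual strands

    for d in (500.0, 50.0, 5.0, 0.5):
        print(d, "->", representation(d))
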
As to trust, this issue has been debated again and
again over the years as former F2F processes became
automated and distributed. HTML scared the hell out
of some of us not because it was bad, but because,
as a primitive tag stacker, it left the essentials of validation
behind. Knowing that it was gencoding,
the first stage of markup system evolution, there
was the terrifying sense of abandoning hard-learned
lessons with regard to "trust but validate". I considered
it a bad precedent: colonization at the cost of the
health of the information being encoded. Then
came the XMLIsOnlyWellFormed nonsense, and the panic
became palpable. Finding out that what was really
in the works was replacing DTDs with instance syntax
was a kind of relief but also a warning to take any
signals from the W3C about futures with more than
a grain of salt. That is why we are shaking out
the Semantic Web concepts like a dog with a sock in
his teeth. The XML development process tore apart
the fabric of trust in some parts of the community
and we have been a while getting that back.
Creating XML was necessary but we would be remiss
if we did not look at the process and ask if we
can do better. I think the results of lessons learned
are in Jon Bosak's exemplary process design for OASIS.
It comes down to process, testing process, scoping
control, and predictable behavior. In Beyond The
Book Metaphor, I took up the issues of trust, cooperation,
and system-destabilization in some detail. It is
obvious that distributed systems and destabilization
are issues we have to pay attention to given the nature
of the web as an amplifier, and therefore, the
nasty potentials of cascades due to feedback. SGMLers
recognized the problems of semantics long ago, experimented with
semantic nets, and came back to document flow as
the best level for coordinating distributed processes,
particularly at the human level. Policy, protocol,
and precise definitions of the documents plus a discipline
with respect to the execution of process enable
humans to identify emerging discontinuities. It builds
a kind of intuition into the practice very similar
to the damping of chaotic systems (operate close
to the edge where the processes are hot, and lightly
thump the control to prevent systems from settling
into local non-optimum minima - see annealing).
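
That annealing aside is literal in simulated annealing: keep the search
hot enough to wander, and occasionally accept a worse move (the thump)
so it does not freeze into a local, non-optimum minimum. A minimal
sketch; the toy cost surface and cooling schedule are illustrative
assumptions:

    import math
    import random

    def anneal(cost, start, temp=10.0, cooling=0.995, steps=5000):
        """Simulated annealing over a one-dimensional cost function."""
        x = best = start
        for _ in range(steps):
            candidate = x + random.uniform(-1.0, 1.0)
            delta = cost(candidate) - cost(x)
            # Always take improvements; sometimes take a worse move while hot.
            if delta < 0 or random.random() < math.exp(-delta / temp):
                x = candidate
                if cost(x) < cost(best):
                    best = x
            temp *= cooling  # cool slowly, so the system settles late rather than early
        return best

    bumpy = lambda x: x * x + 3.0 * math.sin(5.0 * x)  # many local minima
    print("found minimum near", round(anneal(bumpy, start=8.0), 3))
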
There is no magic mantra for trust. The best systems
use tit-for-tat strategies. This strategy is simple
and tends over time to converge on optimum solutions.
System destabilization is not a complex notion. Humans always
must be enabled to "pull the plug" in response to
unknown-unknowns which trigger actions with potentially
catastrophic results (see the 1960 NORAD Thule incident and
the October 19, 1987 stock crash).
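
Tit-for-tat itself is about as small as a strategy gets: cooperate
first, then echo whatever the other party did last. A minimal iterated
Prisoner's Dilemma sketch; the payoffs are the conventional ones and
the opponent is an illustrative stand-in:

    import random

    def tit_for_tat(their_history):
        """Cooperate on the first move, then mirror the opponent's last move."""
        return "C" if not their_history else their_history[-1]

    def suspicious(my_history):
        """Illustrative opponent that defects at random about a third of the time."""
        return "D" if random.random() < 0.33 else "C"

    # Conventional payoffs: (my move, their move) -> my score.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    mine = theirs = 0
    my_moves, their_moves = [], []
    for _ in range(200):
        a, b = tit_for_tat(their_moves), suspicious(my_moves)
        mine, theirs = mine + PAYOFF[(a, b)], theirs + PAYOFF[(b, a)]
        my_moves.append(a)
        their_moves.append(b)
    print("scores (tit-for-tat, opponent):", mine, theirs)
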
Still, while we are here, let's talk about your hacker
problems as a destabilization issue. The goal of destabilization
is to exhaust the energy budget of a system and deprive it
of the capacity to meet mission goals. One can say a
destabilized system exhibits a "higher temperature", that is,
an increase in energy expenditure without a resultant
increase in organization, until it reaches maximum entropy. Direct
attack is one means (eg, a worm), but more subtle approaches
are possible. Some working definitions:
o Instability - the sensitivity of a system element to
variance. The number of sensitive elements and the degree
of sensitivity determine the overall system vulnerability.
o Destabilization - the process of increasing the entropic
value of a system by introducing false referents or relationships
that increase the latency of the messaging system beyond the
tolerance thresholds of the protocol.
A successful destabilization strategy disrupts the synergy of
system and organization. The more interdependent the system,
typically, the easier it is to destabilize. To make the
system less vulnerable, it needs to be noise-tolerant and
we all understand the most common techniques: redundant
data storage, matching and verification, and encapsulation
of components or view dimensionality to restrict propagation.
It is necessary to be able to discriminate natural activity
that results in decay (incompetence in functions, superstitious
learning, etc) from an active destabilizing agent (goal seeking).
Note your own problems with detecting account
creation and discrimination based on seeking higher levels
of privilege. The obvious pattern was goal seeking.
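
Telling goal seeking apart from background noise can be done
mechanically. A minimal sketch of the account example: flag a run of
privilege requests that climbs monotonically instead of wandering the
way noise does. The event format and run-length threshold here are
assumptions, not any particular product's interface:

    def looks_goal_seeking(privilege_levels, min_run=3):
        """Flag a strictly climbing run of privilege levels as likely goal seeking."""
        run = 1
        for prev, cur in zip(privilege_levels, privilege_levels[1:]):
            run = run + 1 if cur > prev else 1
            if run >= min_run:
                return True
        return False

    print(looks_goal_seeking([1, 1, 2, 1, 2, 1]))  # wandering noise: False
    print(looks_goal_seeking([1, 2, 3, 4]))        # climbing toward a goal: True
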
Destabilization in a system can be increased by decreasing
the referential value of a pointer. This
activity seeks to increase uncertainty and decrease confidence
or goodness in a value. These might be called Boltzmann Attacks,
based on application of the Boltzmann entropy equation:
o Uncertainty - increase the number of imprecise terms or referents
that result in unresolved ambiguities. Superstitious learning is a
good example. (aka, FUD).
o Exhaustion - increase the number of referents, precise or otherwise,
beyond the capacity of the system to resolve them within the budget
(eg time, money, any other finite resource). Vaporware is a good
example as it disrupts timing.
Disrupting timing is an excellent strategy. See Miyamoto Musashi -
The Book of Five Rings - "You win in battle by knowing the enemy's timing,
and thus using a timing which the enemy does not expect." He
goes on to describe foreground and background timing and the need
to see both in relationship to each other. Musicians understand
this as syncopation and the effects of it on autonomic systems.
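
For what it is worth, the equation behind those two attacks is
Boltzmann's S = k ln W: entropy grows with the logarithm of the number
of ways a term can resolve, so each added imprecise referent buys the
attacker more disorder for less effort. A minimal sketch, taking k = 1
and treating every candidate resolution as equally likely, both
simplifying assumptions:

    import math

    def referent_entropy(candidate_resolutions, k=1.0):
        """S = k * ln(W), with W the number of ways a term can resolve."""
        return k * math.log(candidate_resolutions)

    for w in (1, 2, 10, 100):
        print(w, "possible resolutions -> S =", round(referent_entropy(w), 2))
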
Some factors that affect destabilization are:
o Position of destabilizing agent in hierarchy of control, that is,
the interdimensional effectiveness for propagating by force
o Length of time of effective destabilization: how long the
error goes undetected and, therefore, the density of the error
(eg, replication)
Destabilization can propagate linearly, by value, or non-linearly
by reference.
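
The distinction is concrete in most languages: a corrupted copy stays
where it is, while a corrupted shared object shows up at once in every
place that holds a pointer to it. A minimal sketch; the record is
illustrative:

    import copy

    master = {"schema": "invoice-v1"}                      # a shared record
    by_value = [copy.deepcopy(master) for _ in range(3)]   # each holder gets a copy
    by_reference = [master for _ in range(3)]              # each holder gets a pointer

    master["schema"] = "corrupted"                         # the destabilizing change

    print([r["schema"] for r in by_value])      # copies untouched: damage is bounded
    print([r["schema"] for r in by_reference])  # every holder sees the corruption at once
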
To destabilize:
o Identify a mission critical component and its importance in
the event stream
o Introduce the destabilizing agent with sufficient resources
to execute a change needed to redefine a component or critical
element of a component. Reclassification is an excellent
strategy here. AKA, labeling. This is why authority is so
problematic when creating semantic nets. Note carefully,
the principle of rationality is weak for organizing human
systems (see Prisoner's Dilemma). No system can be predicated
on self-sacrifice that leads to extinction. Trust in an organization
is in direct proportion to the relationship to self-preservation.
If it helps, it is supported. If it extinguishes, it is attacked.
o Redirect resources so that stabilizing controls are decreased,
eg, distraction. For example, a change of focus can be used
to mask destabilizing activities. When the hacker better understands
your resources and how you apply them, he can create other activities
to deny visibility of his real mission. Coordinated attacks are
hard to defend against if such knowledge is available.
o Protect the agent until the energy budget collapses such that
effective mission closure cannot be achieved by redirection.
Deny the capacity to remediate.
The notion of focus involves temporal elements of concurrency.
What can be known, when and with what degree of certainty grows
or diminishes in relation to the available referents and the
capacity of the system to resolve them.
To counter instability:
o Identify the noise background. Difficult if the hacker
can hide in the noise.
o Regulate and test any interdimensional relationship or
signal. Precisely identify extra-domain relationships.
o Design such that the system uses the smallest number of terms.
As Dr Goldfarb says, conserve nouns, and I say, test verbs.
o Ensure terms with a large referent set are carefully monitored
when applied. Rigorously QA broadcast deliverables by policy.
o Structure terms into strongly bound classes
o Collect performance data to identify emerging instabilities. Compare
local events and environment continuously (use current maps and
keep them current); a small sketch of this follows the list.
o Isolate inherently unstable components or processes from
the network. Unstable processes are often useful particularly
as they operate near the edge of onset of chaos, and therefore,
are engines of evolution. "...crazy but we need the eggs."
o Design system to maximize opportunism and cooperation among
dependent subsystems. If a system is becoming baroque, it
is in need of redesign. If the slightest deviation is a cause
of controversy, you probably have a system that is overly sensitive.
Note this is an issue for many object-oriented systems that use
inheritance.
o Avoid intrigue as a means to administer policy. The thing to
know about Machiavelli is, he was fired. Do not make an employee
bet their badge as the price of innovation. Don't white pig. If
the price of innovation is to watch others get the reward for it, the
behavior will be extinguished.
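
The sketch promised above for collecting performance data: the
simplest workable version is a rolling baseline with a deviation
threshold. The window size and threshold are assumptions to be tuned
against your own noise background:

    from collections import deque

    def instability_alerts(samples, window=20, threshold=3.0):
        """Yield (index, value) for samples far outside the rolling baseline."""
        recent = deque(maxlen=window)
        for i, value in enumerate(samples):
            if len(recent) == window:
                mean = sum(recent) / window
                stdev = (sum((x - mean) ** 2 for x in recent) / window) ** 0.5
                if stdev > 0 and abs(value - mean) > threshold * stdev:
                    yield i, value
            recent.append(value)

    latencies = [10.0 + (i % 3) * 0.1 for i in range(40)]  # steady background
    latencies[30] = 25.0                                   # one sudden excursion
    print(list(instability_alerts(latencies)))             # flags the excursion
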
As some extra reading, the Taguchi Model for process evolution and
Deming's TQA work are worthy. As in all things, over-applied, they
are also a good way to exhaust an organization. Beware the problem
of top-heavy control systems. In most business transactions, if
the customer is satisfied, you are done. They'll call you if they
need you. Make sure they know you will respond when they call.
Len
http://www.mp3.com/LenBullard
Ekam sat.h, Vipraah bahudhaa vadanti.
Daamyata. Datta. Dayadhvam.h
-----Original Message-----
From: Joshua Allen [mailto:joshuaa@microsoft.com]
To me, the trust issue is the one with which we have the least experience as
an industry. This is also going to be the most important challenge for us
to solve long-term. One interesting project is at
http://research.microsoft.com/sn/Farsite/.
P.S. We tolerate inconsistencies in the real world all the time. To quote
from a favorite play by Pirandello, "Oh sir, you know well that life is full
of infinite absurdities, which, strangely enough, do not even need to appear
plausible, since they are true."