From: Costello, Roger L. [mailto:costello@mitre.org]
>Forrester notes that oftentimes corporate and government leaders make
>decisions based upon their own "mental models" of how the world works.
>Unfortunately, those mental models are typically simplistic, and do not
>take into account all of the complex, interrelated forces. As a
>result, they make bad decisions.
Another way to say that is to use the cybernetics definitions: a first
order cybernetic system based on negative feedback is being applied to
a second order system based on positive feedback. This is classic
top-down controls (direct modification of a variable) vs bottom-up
processes (self-organizing and multivariate).
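To make that concrete, here is a toy sketch in Python (my own
illustration, nothing canonical): the same update rule with the sign
of the feedback flipped. One converges on a setpoint; the other runs
away. A thermostat is the first; a rumor or a market bubble is the
second.

    # Toy example: first order negative feedback (top-down control)
    # vs. a self-reinforcing positive feedback (bottom-up) process.

    def first_order_control(x, setpoint, gain=0.5, steps=20):
        """Directly push the variable toward a setpoint."""
        for _ in range(steps):
            error = setpoint - x
            x += gain * error          # negative feedback damps the error
        return x                       # settles near the setpoint

    def self_reinforcing(x, rate=0.3, steps=20):
        """The output feeds its own growth."""
        for _ in range(steps):
            x += rate * x              # positive feedback amplifies
        return x                       # no setpoint will hold it

    print(first_order_control(10.0, setpoint=72.0))   # ~72
    print(self_reinforcing(10.0))                     # ~1900 and climbing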
>On the other hand, if those corporate and government leaders were to
>create computer models that took into account the complex, interrelated
>forces then they would be able to make much better decisions.
We've been discussing intelligence on the CG list. I submit for
consideration:
|---> abduct ---> induct ---> deduct ---> publish --->|
|--------------------------<--------------------------|
is a typical cycle, and that out of the cycle come both the
feedback that regulates it and the outputs that initiate
other processes, both self-similar (a deeper analytical
function) and spawned as neighbor processes (e.g., policy formation).
It comes down to the data they abduct first (their interests), the
hypotheses they form inductively, the outcomes they deduce logically,
and then the conclusions they publish as 'facts', which feed back into
the 'intelligence analysis' process. Those models are only as good as
their interests are relevant to the problem to be solved. GIGO.
Computer modeling is only as helpful as their willingness to endure
alternative analyses and potential outcomes. I suggest you read a
paper called "The Psychology of Intelligence Analysis".
http://www.cia.gov/csi/books/19104/
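If it helps, here is that loop reduced to a Python toy (mine, every
name made up): interests filter what gets abducted, and what gets
published re-enters as 'fact' on the next pass, which is exactly where
GIGO bites.

    # Toy rendering of the abduct -> induct -> deduct -> publish cycle.
    # The point is the feedback path, not the stub logic.

    def analysis_cycle(observations, interests, facts=None, rounds=3):
        facts = facts or []
        for _ in range(rounds):
            data = [o for o in observations if o in interests]  # abduct: only what interests admit
            counts = {}
            for item in data + facts:                           # induct: generalize over data + prior 'facts'
                counts[item] = counts.get(item, 0) + 1
            conclusion = max(counts, key=counts.get)            # deduct: the best-supported outcome
            facts = facts + [conclusion]                        # publish: it re-enters as a 'fact'
            interests = interests | {conclusion}                # and it steers the next abduction
        return facts

    observations = ["sensor", "traffic", "weather", "rumor"]
    print(analysis_cycle(observations, interests={"rumor"}))
    # ['rumor', 'rumor', 'rumor'] -- garbage interests in, garbage 'facts' out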
>Forrester's lifelong goal is to "bring enough people across the barrier
>separating their usual, simple, static viewpoint to a more
>comprehensive understanding of dynamic complexity."
Good goal. Again, models won't make you smarter. They augment your
smarts if applied smartly. See the works of Douglas Engelbart.
>An XML Schema is a model of what data is needed by a system. Just like
>the above corporate and government leaders who make decisions based
>upon their own simple mental model of the world, schema designers make
>schemas based upon their own mental model of the system in which it
>(the schema) will be used.
Lou Burnard boiled it down to, "A DTD is a theory about a document."
In the sense that a theory is also a mental model, used as a control
over the content and over the subsequent processing of that content,
you are right.
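To put it in those terms (my own toy framing in Python, not
Burnard's): the schema is a theory you commit to up front, and every
step downstream quietly assumes the theory held.

    # The 'theory' decides what content gets in; the same assumptions
    # drive everything done with that content afterwards.

    THEORY = {"title": str, "date": str, "body": str}   # the mental model of a document

    def conforms(doc, theory=THEORY):
        return all(k in doc and isinstance(doc[k], t) for k, t in theory.items())

    def process(doc):
        if not conforms(doc):
            raise ValueError("document falsifies the theory; rejected")
        # every line after this point assumes the theory held
        return "%s: %s (%d chars)" % (doc["date"], doc["title"], len(doc["body"]))

    print(process({"title": "Memo", "date": "2005-09-02", "body": "..."}))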
>Leaders fail to take into account the complex interrelationships and
>thus make bad decisions. So too, schema designers fail to take into
>account the complex interrelationships in a system and thus make poor
>schemas.
Unk-unks. Unknown unknowns. Sometimes this is not failure but
deliberate choice: the beggared solution, where the Schema is designed
to force a certain implementation. We like to think we are smart
enough to do that. We might be. We might not.
>The system dynamics approach would be to model the system and its
>information, including all of the interrelationships. From that
>computer model and simulation would emerge indications of what the XML
>Schema should contain.
First, simulation is expensive (we can make it cheaper, and the X3D
guys keep trying to get that done), but that aside...
See above. That is one approach but beware the trap of first order
controls over second order systems. We black box some processes for
good reasons; abducting too much data to make a decision can be worse
than abducting too little. All of the stuff you read here about
"Dare to do less", 'YAGNI', 'lots of little schemas', 'beware the
naming conventions', 'take care with polite inclusion', these are all
different ways of using and/or abusing black boxes. Both extremes
bite. There is no free semantic lunch. We busk our way through and
pray we don't screw it up worse. Even experience isn't a perfect
hedge because the past is not always informative, but it is one of
the better bets.
So the first question you ask yourself is 'am I trying to preserve a
homeostasis/status quo?', in which case you use a first order control
and measure for stability; or 'am I trying to get sweeping changes
through morphogenesis?', in which case you design a second order
process and keep thumping it until it works as you wish. It seldom
will, because self-organizing systems attain momentum and are
marvelously resistant to outside forces; they become 'strange' and
'estranged'. Or one time you get the results you are after, accepting
that it might not be repeatable.
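For what it is worth, here is a bare-bones stock-and-flow run in the
Forrester style, as a Python toy with made-up numbers: an inventory
stock, a demand inflow, and a first order ordering rule chasing a
target through a delay.

    # Toy stock-and-flow model. Overshoot and ringing fall out of the
    # structure (the delay), not out of anyone's bad intentions.

    def simulate(steps=40, target=100.0, delay=4, gain=0.25):
        stock = 100.0
        pipeline = [10.0] * delay                 # orders already in transit
        history = []
        for t in range(steps):
            demand = 10.0 if t <= 10 else 15.0    # step change in demand
            stock += pipeline.pop(0) - demand     # delayed deliveries arrive, demand drains
            order = max(0.0, demand + gain * (target - stock))
            pipeline.append(order)                # today's order arrives 'delay' steps later
            history.append(round(stock, 1))
        return history

    print(simulate())   # dips, overshoots, rings, settles back toward the target

Note which quantities you had to carry to make that run at all: the
stock, the orders in transit, the delay, the demand. That is the sort
of list the quoted approach says should fall out of the model and into
the Schema.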
>Thus, the system dynamics approach is to create XML Schemas after
>creating and simulating computer models of the system.
That is one way, and if you look at the tools for
orchestration/choreography, that is what they do, except they run the
simulation on the human processors in real time. That is why, if you
are smart, they aren't deep. Or you mash up a lot of web services and
pray the data is good enough and the mashup isn't applied unwisely.
Simulations are virtual. Instrumenting human processes is expensive.
We can simulate the effects of a plume over a population, of a tsunami
on a coastline, or of a Cat 4 on a levee; we can't do much about Mom
running INTO the plume to pick up her kids from school, about tourists
who think the surf is up, or about the effects of concentrating a
culture into a bowl below sea level. And by the way, it is a bad thing
if the top-level controls are working from draft documents (THE NRP
AND NIMS ARE DRAFTS, YOU FRIKKIN' IDIOTS!!!!). And we pick up the
pieces.
The reason emotional processing dominates the mammalian brain is that
it evolved as a system for rapidly summing inputs into a decision to
stand or flee.
See the diagram above.
len