From: Bob Wyman [mailto:firstname.lastname@example.org]
Roger L. Costello wrote:
> Approach 1 - Progress via Simplification
> Approach 2 - Progress via Complexification
> His note also makes reference to Wolfram's book in which
>patterns which have apparently great complexity are discovered to be the
>result of applying exceptionally simple algorithms. A common measure of
>the entropy of a system is the "number of bits" needed to encode the
>algorithm that describes the system...
That's Kolmogorov complexity (an idea that many have contributed to,
including Chaitin). Wolfram is just riding a wave.
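To make the "number of bits needed to encode the algorithm" idea concrete,
here is a minimal Python sketch of my own (not from Roger's note or from
Wolfram) that uses compressed length as a rough, computable stand-in for
Kolmogorov complexity, which is itself uncomputable in general:

    import os
    import zlib

    # Compressed length is a crude, computable proxy for Kolmogorov
    # complexity (the true quantity is uncomputable in general).
    def description_length(data: bytes) -> int:
        return len(zlib.compress(data, 9))

    # Output that looks busy but is produced by a trivially simple rule...
    rule_based = ("ab" * 50000).encode()
    # ...versus output with no short generating rule at all.
    random_bytes = os.urandom(100000)

    print(description_length(rule_based))    # a few hundred bytes
    print(description_length(random_bytes))  # roughly 100000 bytes; incompressible

Output generated by a simple rule costs far fewer bits to describe than
output with no short rule, which is all the Wolfram observation amounts to.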
>On the other hand, if you have knowledge of the system, you'll
>use the simple, short algorithm and decide that the system has low entropy...
Actually, low complexity. Entropy is a measure of disorder; one should
look at Fisher and Shannon information and contrast them.
What is the uncertainty of a random set vs. the uncertainty of an
ordered set? What is the overall length of the program? What
is the predictability of the output? This has implications for
XML design, particularly with regard to frequency and occurrence,
and enormous implications for process specifications (think of the
orchestration and choreography languages created for web services).
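If you want to see what "frequency and occurrence" looks like as a number,
here is a small, purely illustrative Python sketch (the documents and
element names are hypothetical) that computes the Shannon entropy of the
element-name distribution in an XML instance:

    import math
    from collections import Counter
    from xml.etree import ElementTree as ET

    def element_name_entropy(xml_text: str) -> float:
        # Shannon entropy, in bits per element, of the element-name distribution.
        root = ET.fromstring(xml_text)
        counts = Counter(el.tag for el in root.iter())
        total = sum(counts.values())
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    # A highly regular vocabulary: almost every element is <item>.
    regular = "<orders>" + "<item/>" * 99 + "</orders>"
    # A flat vocabulary: nine names, each used once.
    varied = "<doc><a/><b/><c/><d/><e/><f/><g/><h/></doc>"

    print(element_name_entropy(regular))  # close to 0 bits: highly predictable
    print(element_name_entropy(varied))   # about 3.2 bits: much less predictable

The more predictable the vocabulary, the fewer bits each occurrence
carries; that is the trade-off a schema designer is making whether or
not it is stated in these terms.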
>Thus, apparent entropy is relative to the observer even
>though the actual entropy is independent of viewer. One measure of
>relative "understanding" or "knowledge" is the difference between
>observers in their perception of the apparent entropy of a system.
Yes, but these models are incomplete without the notion of feedback.
It is the one notion required for evolution. Again, if we have
multiple observers, multiple nested processes, and multiple feedback
loops (ports) running at different rates, what is the overall
algorithmic complexity of the system of systems? What can one
do to reduce that without collapsing the system? (Think about
the process engineering methodologies that insist on 'taking
out work' over those that add more controls and measure more.)
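To put one crude number on the quoted point that apparent entropy is
relative to the observer, here is an illustrative Python sketch (the
"rule" and the data are invented): an observer with no model of the
system can do no better than compress the raw output, while an observer
who knows the generating rule only needs enough bits to state it, and
the difference between the two is one measure of the second observer's
knowledge:

    import zlib

    # The system's output, and the one-line rule that actually produces it.
    rule = b"repeat 'GATTACA' 20000 times"
    output = b"GATTACA" * 20000

    # Observer A has no model: the best case is compressing the raw output.
    apparent_bits_a = 8 * len(zlib.compress(output, 9))

    # Observer B knows the rule: only the rule itself needs encoding.
    apparent_bits_b = 8 * len(rule)

    print(apparent_bits_a)                    # thousands of bits
    print(apparent_bits_b)                    # a couple of hundred bits
    print(apparent_bits_a - apparent_bits_b)  # one crude measure of B's advantage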
> In order to arrive at minimally complex models, and thus simple
>implementations of what might have originally appeared to be complex
>systems, what we need to do is to first carefully describe and otherwise
>identify the full behavior of the modeled system. Then, when we have a
>full description of the system, we can begin to seek the usually simple
>model that describes it.
That usually doesn't work. In the extreme cases, you run into what
I called, in Beyond the Book, the Triple-omni problem: it requires you
to be omniscient, omnipotent, and omnipresent. You must know all things
at all times and have all power over them. This doesn't happen in reality.
Real systems are built a piece at a time, glued together, tested, observed,
and then refactored. The locales will vary in their discrete vs. continuous
aspects of rebuilding. The difference between artificial and natural systems
is the use of episodic memory. This is why Ballard et al. insist on the
primacy of situational controls over purely logical systems. If you
look at the process languages and how instances of them evolve, you
will note that they are situational, and the issue now for web services
is determining how feedback is used to continuously or discretely
evolve their definitions (evolution of the control over evolution
of the product).
Once you break this down into message-based systems, you arrive at the
concept of information ecosystems: dynamic and directed evolution
at very large and very small scales.
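For what a feedback port on a process definition might look like in the
small, here is a toy Python sketch (the names, the threshold, and the
retry_limit knob are all invented for illustration and are not drawn
from any of the orchestration languages): running instances report a
failure rate back through the port, and once the evidence crosses a
threshold the definition itself changes, which is the discrete flavor
of the evolution described above:

    from dataclasses import dataclass, field

    @dataclass
    class ProcessDefinition:
        # A toy stand-in for an orchestration/choreography definition.
        retry_limit: int = 1
        observed_failure_rates: list = field(default_factory=list)

        def feedback(self, failure_rate: float) -> None:
            # Feedback port: fold observations from running instances
            # back into the definition itself.
            self.observed_failure_rates.append(failure_rate)
            mean = sum(self.observed_failure_rates) / len(self.observed_failure_rates)
            # Discrete evolution: the definition changes once the evidence
            # crosses a threshold, rather than drifting continuously.
            if mean > 0.2:
                self.retry_limit += 1

    proc = ProcessDefinition()
    for rate in (0.1, 0.3, 0.4):  # failure rates reported by running instances
        proc.feedback(rate)
    print(proc.retry_limit)       # 2: the definition evolved under feedback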
> Ideally, in a software project, one cycles through as many
>phases of complexification->simplification as possible during the
>"design" phase. It is really unfortunate, although often unavoidable,
>when these phases involve released code...
No, that is ideal. The customer is an observer and has access to the
feedback ports. The real trick is to create an economic model that
recognizes this without breaking the firm fixed price for the initial
acquisition. It's called maintenance. The smart contract builds this
in as services. It is this evolution toward a services economy that
Gerstner recognized, and which has made IBM a major player while Sun
has languished and Microsoft has worked mightily to maintain dominance
over the core technologies. The Microsoft hegemony will crack and
IBM will prosper. Sun will wise up.
But back to frequency and occurrence: what will the effect of
open blogging be on Sun and Microsoft? Is the size of the
company an issue?