Roger L. Costello wrote:
> Approach 1 - Progress via Simplification
> Approach 2 - Progress via Complexification
His note also makes reference to Wolfram's book, in which
patterns of apparently great complexity are discovered to be the result
of applying exceptionally simple algorithms. A common measure of the
entropy of a system is the "number of bits" needed to encode the
algorithm that describes the system...
Take a look at any of the pictures in Wolfram's book and
consider them to be systems... If you don't know the algorithm that
generated the image, you can easily create a very complex and long
algorithm to describe the image. Thus, you might think it had high
entropy. On the other hand, if you have knowledge of the system, you'll
use the simple, short algorithm and decide that the system has low
entropy.
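(As a rough illustration -- mine, not Wolfram's code -- consider Rule 30,
one of the elementary cellular automata from Wolfram's book. A minimal
Python sketch might look like the following; the width and number of
steps are arbitrary choices, and the whole generating rule is the single
number 30.)

    # Rule 30: the 8 bits of the number 30 give the next state of a cell
    # for each of the 8 possible (left, center, right) neighborhoods.
    RULE = 30
    WIDTH, STEPS = 63, 32          # arbitrary display size

    row = [0] * WIDTH
    row[WIDTH // 2] = 1            # start from a single "on" cell

    for _ in range(STEPS):
        print(''.join('#' if c else '.' for c in row))
        row = [(RULE >> (4 * row[(i - 1) % WIDTH]
                         + 2 * row[i]
                         + row[(i + 1) % WIDTH])) & 1
               for i in range(WIDTH)]

Described without knowledge of the rule, the resulting picture looks like
so much arbitrary data; described with it, the whole thing collapses to
"Rule 30, one seed cell, 32 steps."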
What often happens when people begin to understand apparently
complex systems is that they eventually begin to discover that the
actual entropy of the system is vastly less than the system's apparent
entropy. Of course, once the entropy is discovered to be less than
originally apparent, the newly apparent entropy begins to approximate
more closely the actual entropy -- for those who have the
understanding. Thus, apparent entropy is relative to the observer, even
though the actual entropy is independent of the observer. One measure of
relative "understanding" or "knowledge" is the difference between
observers in their perception of the apparent entropy of a system.
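(To put a very rough number on that relativity, here is a hand-wavy
Python sketch -- my own illustration, not a rigorous entropy measure. A
general-purpose compressor stands in for the observer who does not know
the generating rule; the short statement of the rule stands in for the
observer who does.)

    import zlib

    def rule30(width=257, steps=256):
        # Same Rule 30 automaton as above, returned as one long byte string.
        row = [0] * width
        row[width // 2] = 1
        rows = []
        for _ in range(steps):
            rows.append(bytes(row))
            row = [(30 >> (4 * row[(i - 1) % width]
                           + 2 * row[i]
                           + row[(i + 1) % width])) & 1
                   for i in range(width)]
        return b''.join(rows)

    pattern = rule30()
    apparent = len(zlib.compress(pattern, 9))    # observer without the rule
    informed = len(b"rule 30, 257 cells, 256 steps, single seed cell")

    print("apparent description:", apparent, "bytes")
    print("informed description:", informed, "bytes")

The compressed form is still vastly larger than the few dozen bytes of
the informed description -- the pattern has not changed, only the
observer's knowledge of it has.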
To a great extent, the problem of programming is that of
building models of real or conceptual systems. If a model is to be
accurate, then it must have *at least* the same complexity as the system
whose behavior it models. Any additional complexity in the model is
simply the result of a failure to understand the modeled system, or of a
failure of either the modeling language or the modeler -- but it need not
impact the accuracy or utility of the model. Any model which is less
complex than the modeled system will be, must
be, inaccurate (even though it might still be useful for some
purposes...).
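(A toy illustration of that last point -- again mine, with an assumed
three-state "system" -- makes the lower bound concrete: no two-state
model can faithfully reproduce a three-state cycle, because any such
model must merge two of the states the system actually distinguishes.)

    from itertools import product

    system_next = {'A': 'B', 'B': 'C', 'C': 'A'}   # assumed toy system: a 3-state cycle

    def faithful(mapping, model_next):
        # The model is faithful if it tracks every system transition:
        # stepping the model state matches mapping the system's next state.
        return all(model_next[mapping[s]] == mapping[system_next[s]]
                   for s in system_next)

    # Exhaustively try every way of collapsing the system onto two model
    # states ('x' and 'y') and every possible two-state transition table.
    found = any(
        faithful({'A': m0, 'B': m1, 'C': m2}, {'x': t0, 'y': t1})
        for m0, m1, m2 in product('xy', repeat=3)
        if len({m0, m1, m2}) == 2
        for t0, t1 in product('xy', repeat=2)
    )
    print("faithful 2-state model exists?", found)   # prints False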
In order to arrive at minimally complex models, and thus simple
implementations of what might originally have appeared to be complex
systems, we must first carefully describe and otherwise identify the
full behavior of the modeled system. Then, when we have a full
description of the system, we can begin to seek the usually simple model
that describes it. The "Progress via Complexification" typically results
from this cataloging process; i.e., each new bit of complexity in the
model (i.e., the program) results from discovering, and thus cataloging,
another aspect of the modeled system. The "Progress via Simplification"
typically comes when the modeler realizes that what initially appeared
to be multiple aspects are really just different perspectives on a
smaller number of aspects. Thus, while "Progress via Complexification"
often results in a model which grows toward being more complex than the
modeled system, "Progress via Simplification" more
accurately aligns the complexity of the model with that of the modeled
system. (Remember, an overly complex model may still be useful -- only
an oversimplified model is guaranteed to be inaccurate.)
It should be recognized that cycling between "Progress via
Complexification" and "Progress via Simplification" is, essentially,
inevitable. Complexification comes as a result of discovering the
apparent complexity of systems; simplification is the result of
discovering the actual complexity of those systems. Complexification is
the result of discovery. Simplification is the result of knowledge and
understanding. First, we must discover -- then we can understand.
Ideally, in a software project, one cycles through as many
phases of complexification->simplification as possible during the
"design" phase. It is really unfortunate, although often unavoidable,
when these phases involve released code...
bob wyman