From: Roger L. Costello [mailto:costello@mitre.org]
>I think it is clear that except for trivial, academic cases RDF Schema
>and OWL do not have the robustness to capture the dynamically changing
>nature of real-world semantics. To do so, we must go beyond these
>ontology languages.
Because ontologies do not capture. They *distinguish*.
The intelligent observer captures. One reason to create
automatons for this (to answer Nicholas) is to make
automated agents behave intelligently so that they can
act as representatives of their owners: for example,
intelligent search 'droids that discover and distinguish
resources, then negotiate with them to create a proposal,
based on the results of the negotiation, that can then
be presented to the human owner. This is doable, but
better negotiation requires an understanding of domain
creation and domain evolution such that the agent can
identify the current domain. In short, the agent manages
contexts on behalf of the user, and that is a way of
saying it 'creates and evolves schemas'. Already, tools
such as Visual Studio produce schemas on demand.
Ontologies model. A known means of modeling
what you are working on is situational cybernetics using
first- and second-order cybernetic systems. The vital
knowledge is that in a dynamic situation, controls are
emergent or evolve by negotiation.
There is nothing spooky about complex systems,
or really, anything distinctly new. Pipelined systems
with negotiated controls are the common approach
because of the need for a formal system that
enables scope of control (where scope has dimensions
such as temporal, spatial, conceptual class, and so
on).
Formal systems require closure and therefore never
accurately model 'reality', and that is a 'so, but
so what' statement. Tools is tools. The Golem
problem is solved by limiting the authority of
the ontology and its operational means. For your
work, consider a system that provides a means to
parameterize an XSLT that modifies the document,
be it a single transform or a chain of XSLTs that
modify aspects of a schema or themselves. Even if
complex, it is finite (it closes).
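A minimal sketch of such a chain, assuming Python with
lxml as the parameterizing host; the stylesheet and the
'mode' parameter are invented for illustration:

# A sketch of a parameterized XSLT step using Python and
# lxml. The stylesheet and the 'mode' parameter are invented;
# each pass rewrites the document, and a finite chain of such
# passes closes.
from lxml import etree

STEP = b"""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:param name="mode" select="'loose'"/>
  <!-- identity template: copy everything through -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- stamp the root with the negotiated mode -->
  <xsl:template match="/*">
    <xsl:copy>
      <xsl:attribute name="mode"><xsl:value-of select="$mode"/></xsl:attribute>
      <xsl:apply-templates select="@*[name()!='mode']|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.XML(STEP))
doc = etree.XML(b"<schema><element name='price'/></schema>")

# A finite chain: each step applied with its own parameters.
for mode in ("loose", "strict"):
    doc = transform(doc, mode=etree.XSLT.strparam(mode)).getroot()

print(etree.tostring(doc, pretty_print=True).decode())

Each pass is just another transform over the document, so
the steps compose, and a finite chain closes.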
The work on upper-level distinctions that can then be
used for organizing the classifications efficiently
and *distinctly* is the tough part. When you say
"beyond ontology", you are repeating an age-old
pattern in computer science: shifting the focus
from data to process. To understand the world
'as it is', one sees 'being', 'thinging', 'becoming',
and so on, the dance of lila, the essence of samsara.
But to be in that world, one uses energy to change it,
one 'interacts'. As I said, at the end of your
investigation, you should have a much greater appreciation
for data-driven applications, particularly data-driven
GUIs.
Where is the source for parameters to XSLT scripts?
That is the environment interface control.
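As a hedged illustration of that control (the variable
name NEGOTIATED_MODE is invented), the parameterizing
program can lift its values straight from the process
environment:

# Sketch: the environment interface control as the source of
# XSLT parameters. 'NEGOTIATED_MODE' is an invented variable;
# a sensor, message queue, or config store could stand in for
# os.environ here.
import os
from lxml import etree

def params_from_environment():
    mode = os.environ.get("NEGOTIATED_MODE", "loose")
    return {"mode": etree.XSLT.strparam(mode)}

# Reusing 'transform' and 'doc' from the sketch above:
# doc = transform(doc, **params_from_environment()).getroot()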
John Sowa makes some important points about ontologies
in his work and recent discussions on the CG list:
"The upper ontology should be
organized around fundamental distinctions, from which
the various categories and their placement in the
hierarchy can be derived automatically.
One of the distinctions is between physical and
abstract. That distinction covers representations
as well as possible individuals that don't really
exist, such as unicorns.
All types are abstract, whether types of lions or
types of unicorns. All actual lions are physical,
and any instance of Unicorn, if it were to exist,
would also be physical. Similarly, the information
in all representations is abstract, and the encoding
in any medium is physical.
That is an example of how one distinction, high up
in the hierarchy, has implications that ripple down
throughout the ontology. Unless the distinctions
are located at the proper level, redundant copies
of similar but subtly different axioms must be
repeated in many disparate sections of the hierarchy.
And that brings us to another point that I have been
making again and again: we should not start by drawing
diagrams, but by listing the relevant distinctions.
Then at every stage of ontology development, we can
push a button to cause some algorithm, such as FCA,
to create a hierarchy of categories that shows where
each of them is located in terms of the defining
distinctions." - J.F. Sowa
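That 'push a button' step can be made concrete. Here is a
toy sketch of Formal Concept Analysis (FCA) in Python over
an invented context echoing the lion/unicorn distinctions
above; a real ontology would want an efficient lattice
algorithm rather than this exhaustive one:

# Toy Formal Concept Analysis (FCA): derive a concept
# hierarchy from a table of distinctions. The context is
# invented, echoing the lion/unicorn example above.
from itertools import combinations

context = {
    "actual-lion":   {"physical"},
    "lion-type":     {"abstract", "type"},
    "unicorn-type":  {"abstract", "type"},
    "novel-text":    {"abstract", "information"},
    "printed-novel": {"physical", "encoding"},
}
attributes = set().union(*context.values())

def extent(attrs):
    # objects carrying every attribute in attrs
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    # attributes shared by every object in objs
    if not objs:
        return set(attributes)
    return set.intersection(*(context[o] for o in objs))

# A formal concept is a closed (objects, attributes) pair.
# Exhaustive enumeration is fine for a toy this small.
concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(sorted(attributes), r):
        objs = extent(set(attrs))
        concepts.add((frozenset(objs), frozenset(intent(objs))))

# Ordering concepts by extent inclusion IS the hierarchy.
for objs, attrs in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(objs), "<->", sorted(attrs))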
Back to topic. A known approach to modeling and
evolving systems is semiotics.
One could model a semiotic processor using XML, XSLT,
and the parameterizing program as the interface to
the environment. One gets a recursive process model.
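A sketch of that recursion, reusing the transform, doc,
and params_from_environment fragments above: the output of
one pass, plus fresh parameters read from the environment,
becomes the input of the next.

# The recursive process model: transform, re-read the
# environment, transform again, until the document stops
# changing (a fixed point) or a step budget runs out.
prev = None
for step in range(10):                 # finite: it closes
    doc = transform(doc, **params_from_environment()).getroot()
    serialized = etree.tostring(doc)
    if serialized == prev:             # fixed point reached
        break
    prev = serialized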
Something from my HumanML notes:
"If Peirce's ideas that the universe is a sign producer are applied,
then in effect, the model includes the environment as well as all
semiotes within it. Schematically:
|------------------------------------------------------|
|Semiote: Global environment |
|------------------------------------------------------|
| |
| +----------------------------+ |
| | | |
| v | |
| |----------------------------| | |
| |Semiote: local environment | | |
| sign |----------------------------| | |
|---+-->| sign ->--+---+ | |
| | | Semiote: member | | | | |
| | | Semiote: member | | | | |
| | |----------------------------| | | | |
| | | | | |
| +-------------------------------------+ | | |
| | | |
| +-------------------------+ | |
| | | |
| |-----------v----------------| | |
| |Semiote: local environment | | |
| sign |----------------------------| | |
|---+-->| sign ->--+--------+ |
| | | Semiote: member | | |
| | | Semiote: member | | |
| | |----------------------------| | |
| | | |
| +-------------------------------------+ |
| |
|------------------------------------------------------|
We design a class structure for that which can process the
structured messages (signs) that are provided in the form
of an XML instance. This would incorporate XML technology
such as DOM and possibly XSLT as components of the sign processor
which itself may be an XML pipeline augmented with a rule
processor such as a Schematron engine."
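A minimal class-structure sketch of that sign processor,
once more assuming Python with lxml (whose isoschematron
module provides a Schematron engine); the rule and the
pipeline stages are invented placeholders:

# A sign processor: validate an incoming sign against a
# Schematron rule, then push it through an XSLT pipeline.
from lxml import etree
from lxml.isoschematron import Schematron

RULES = etree.XML(b"""
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <rule context="sign">
      <assert test="@type">every sign declares a type</assert>
    </rule>
  </pattern>
</schema>""")

class SignProcessor:
    def __init__(self, stylesheets):
        self.rules = Schematron(RULES)
        self.pipeline = [etree.XSLT(s) for s in stylesheets]

    def process(self, sign):
        # Reject ill-formed signs before transforming them.
        if not self.rules.validate(sign):
            raise ValueError("sign rejected by rule processor")
        for step in self.pipeline:
            sign = step(sign).getroot()
        return sign    # the emitted result is itself a sign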
len