   RE: [xml-dev] Mailmen, POST, Intent, and Duck Typing


I don't think it weird, but I'm not surprised by it. It is pretty simple. We use markup instead of delimited ASCII because we want to embed more semantic hints about the producer's intent. If we want stronger hints, we go to a language like RDF to provide stronger linking among the signs. If we want to send our intent and ensure it can't be misinterpreted, we package up the intentions/functions/methods with the data and send that.
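To make that escalation concrete, here is a small sketch (the record contents, element names, and URIs are invented for illustration; dc: stands in for a Dublin Core-style vocabulary): the same fact as delimited ASCII, as markup, and as RDF-style triples, each step carrying stronger hints of the producer's intent.

    # The same fact at three levels of semantic commitment.
    # Record contents, element names, and URIs are hypothetical.

    ascii_record = "978-0140449136,Homer,Odyssey"    # delimiters only: intent is implicit

    xml_record = """<book isbn="978-0140449136">
      <author>Homer</author>
      <title>Odyssey</title>
    </book>"""                                       # element names hint at what was meant

    rdf_triples = [                                  # explicit, linkable relations among signs
        ("urn:isbn:978-0140449136", "dc:creator", "Homer"),
        ("urn:isbn:978-0140449136", "dc:title", "Odyssey"),
    ]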
 
So once again, if there is to be a pragmatic layer, and I assume that means something codified in the program or code that flips the bits on the machines, then other than sharing a philosophy of meaningful utterances, norms, and affordances, how would one communicate those utterances, norms, and affordances? IOW, what is above semantics? Pragmatics. How do we implement pragmatics? Objects.
 
Other means may be possible, but that is a first position. Even an interpreter for a set of RDF assertions attempting to evaluate a text requires a functional contextualizer.
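As a sketch of what that last sentence might mean in code, assuming a toy triple list and a context supplied as a plain function (none of this is a real RDF API; all names are invented):

    # Hypothetical sketch: even evaluating RDF assertions needs a
    # function that supplies context, i.e. a functional contextualizer.

    triples = [
        ("ex:report", "ex:status", "ex:urgent"),
        ("ex:report", "ex:author", "ex:len"),
    ]

    def evaluate(assertions, contextualizer):
        """Turn bare assertions into actions by consulting the context."""
        for subject, predicate, obj in assertions:
            action = contextualizer(subject, predicate, obj)
            if action is not None:
                action()

    def office_context(subject, predicate, obj):
        # The mapping from sign to behavior lives here, outside the data.
        if predicate == "ex:status" and obj == "ex:urgent":
            return lambda: print("escalate", subject)
        return None

    evaluate(triples, office_context)   # prints: escalate ex:report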
 
len
-----Original Message-----
From: adasal [mailto:adam.saltiel@gmail.com]
Sent: Monday, February 27, 2006 5:41 PM
To: xml-dev@lists.xml.org
Subject: Re: [xml-dev] Mailmen, POST, Intent, and Duck Typing

That's weird. It's an inversion of what is expected. But I think what you are referring to is what happens when things are schematised. OO programmers want to know "how to do it", so laying out a schema is considered "helpful". But the issue is whether there is room for the unexpected, which is another way of looking at these issues. Too much risk is just too risky, but no risk is boring. When I read a book I take risks in my interpretation of what I am reading; the risks are meant to expand my interest, in the broadest sense. When I read a book (search through texts) I determine the patterns of interest.
But we are all thinking about applications, such as text searching in an educational context to take just one example, where that connection-making and the attendant risk are delegated to others. For instance, the program designer might incorporate an anonymous recommendation system that works on some algorithmic assessment of how the material in question is used, quite possibly something similar to what an individual would do anyway in assessing the potential interest of a text (i.e. what are people saying about that book!).
As you say, we don't want all the results to be homogenized. Indeed, we shouldn't want something that has been available previously in a different and, arguably, better form (word of mouth).
There are very definite constraints on how such programs can be designed if they are not to be just gimmicks. Certainly one axis is that of risk, and the analysis of that must take into account the consequences of delegating that risk. The situation of a human actor can also be usefully contrasted with that of a machine, as the evaluation of risk differs from one to the other.
BTW I don't think one should confuse how to do something with what it is that gets done. The purpose and intent of communicative acts may be amenable to codification in a computer program such that signs are passed as symbols. All that has happened, though, is that a communication has taken place within the expected parameters designed around it.
Adam

On 27/02/06, Bullard, Claude L (Len) <len.bullard@intergraph.com> wrote:
Less than that. The pragmatic layer over the semantic web may simply be the Revenge of the OOPMen (object-oriented programmers).
 
If pragmatics as linguistics is about the purpose/intent of speech acts, then in a computer system, a fully-laden purposeful data structure comes with its own methods. If signs are just typed arguments passed among functions, and passing objects is a means to pass a purposeful data structure, then Pragmatics On The Web comes down to object-oriented programming on top of RDF/CG, not statistical divination or theory of mind.
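A minimal sketch of that position, with all class and method names invented: the data structure carries its intent as a method, and the receiving function duck-types it, calling the method without asking what the object is.

    # Hypothetical sketch: a "fully-laden purposeful data structure"
    # carries its intent as a method, so the receiver need not guess it.

    class PurchaseOrder:
        def __init__(self, sku, quantity):
            self.sku = sku
            self.quantity = quantity

        def perform(self, system):
            # The producer's intent travels with the data.
            system.reserve_stock(self.sku, self.quantity)

    class Inventory:
        def reserve_stock(self, sku, quantity):
            print(f"reserved {quantity} x {sku}")

    def receive(message, system):
        # Duck typing: anything with a perform() method is a speech act.
        message.perform(system)

    receive(PurchaseOrder("widget-42", 3), Inventory())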
 
If so, it's a movie we've already seen.  It was good on the big screen and sorta dorky on the little screen, but I'm sure cable will
replay it as often as they can.
 
len
-----Original Message-----
From: adasal [mailto:adam.saltiel@gmail.com]
Sent: Sunday, February 26, 2006 5:59 PM
To: xml-dev@lists.xml.org
Subject: Re: [xml-dev] Mailmen, POST, Intent, and Duck Typing

Len,
this is very interesting. It is the first time I have come across Grice outside of academic linguistic circles. What I had read of his I always thought must be applicable to ontology reasoning, but I never took the thought further. It is interesting that the Gricean contribution is classified as pragmatics, the classification Peirce gave his own logic.
Thanks (all) for this thread.

The fact that "dumb" Bayesian
networks with no semantic formalisms have been much more successful
than expert systems in classifying spam, and therefore much more
useful to real people, is perhaps a beacon in this regard.

There are those who attempt to combine the two (losing the "purity" of both), with each node of an ontology tree computed against a statistical algorithm.
But the intriguing thing about statistical analysis is that in some way it is not "dumb"; it really is an open question how neural-type networks map onto brain and human social functioning. Stochastic processes and models of those processes are often givens in psychological research, i.e. a neural net model may be taken as sufficient to model processes peripheral to the one under investigation.
Ontologies are convenient ways of organising information that take some of their convenience from the fact that their structure contains information. But there is no reason to believe that because an ontology can be generated it is a discovery of what already exists; on the contrary, it is an intellectual invention that provides short cuts to implied knowledge in some circumstances. C.S. Peirce demonstrated the logical necessity of the underlying relationships, not particular, specific ontologies.
I think that the issues are not of the complexity of the machine, but of the complexity of the user, if the user is human. Methods that may work for machine <-> machine negotiation may not work for human <-> machine, pragmatically speaking. I think this is an area for research and clarification.
Adam

On 24/02/06, Bullard, Claude L (Len) <len.bullard@intergraph.com> wrote:
If I make a bet on the cat being dead, does that
alter the probability, the fact, or in any way
change the need to open the box and look?

On the other hand, if I am making a bet on
spam, my risks are lower than the cat betting
that I am going to open the box.

Given the frequency of spam, the occasional
misclassification is a low-cost event, strictly
speaking, although there is a probability that
I will miss something important.
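To put invented numbers on that bet: assume a false-positive rate, a cost for a
missed important message, a cost for hand-triaging one spam, and daily volumes;
the filter wins the bet if the expected loss from misfiled mail stays under the
cost of doing the sorting by hand. Every figure below is hypothetical.

    # Back-of-envelope expected-cost bet; all numbers invented.
    p_false_positive = 0.001    # chance a legitimate message is misfiled
    cost_missed_mail = 50.0     # assumed cost of missing something important
    cost_triage_spam = 0.10     # assumed cost of hand-sorting one spam
    legit_per_day = 100
    spam_per_day = 200

    expected_loss = p_false_positive * legit_per_day * cost_missed_mail  # 5.0/day
    manual_cost = cost_triage_spam * spam_per_day                        # 20.0/day
    print(expected_loss < manual_cost)  # True: the filter is the better bet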

Pragmatic systems are learning systems.

len


From: Chris Burdess [mailto:d09@hush.ai]

The fact that "dumb" Bayesian
networks with no semantic formalisms have been much more successful
than expert systems in classifying spam, and therefore much more
useful to real people, is perhaps a beacon in this regard.
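For reference, a minimal sketch of the kind of "dumb" statistics being praised:
a naive Bayes word filter (simpler than a full Bayesian network, but the same
spirit), with invented training data, equal priors, and no semantic formalism
at all.

    # Minimal naive Bayes spam filter: word counts, not semantics.
    # Training data and smoothing are invented for illustration.
    from collections import Counter
    import math

    spam_docs = ["win cash now", "cheap pills now"]
    ham_docs = ["meeting notes attached", "lunch at noon"]

    def train(docs):
        counts = Counter(w for d in docs for w in d.split())
        return counts, sum(counts.values())

    spam_counts, spam_total = train(spam_docs)
    ham_counts, ham_total = train(ham_docs)
    vocab = set(spam_counts) | set(ham_counts)

    def log_prob(word, counts, total):
        # Laplace smoothing so unseen words don't zero out the product.
        return math.log((counts[word] + 1) / (total + len(vocab)))

    def is_spam(text):
        spam_score = sum(log_prob(w, spam_counts, spam_total) for w in text.split())
        ham_score = sum(log_prob(w, ham_counts, ham_total) for w in text.split())
        return spam_score > ham_score

    print(is_spam("cash now"))         # True
    print(is_spam("meeting at noon"))  # False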
