   [Fwd: Returned mail: see transcript for details]


sorry to put this through the list, but it's the only way to get to
simon.....

your mail server's a bit unfriendly. we run our own mail server on a
permanent ip address attached to our adsl connection from iprimus (in
this case - actually the smtp server is on my linux-powered laptop, but
the web attachment this morning is via our iprimus adsl line)

whatever.... it's still an awfully big stick for what i presume is a
spam problem.
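
for what it's worth, the workaround the bounce asks for is easy enough -
hand the message to the provider's outbound relay instead of delivering
straight from our own address. a rough python sketch (the smarthost name
is my guess, not something taken from the bounce):

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "rjm@zenucom.com"
msg["To"] = "simonstl@simonstl.com"
msg["Subject"] = "via the provider's outbound server"
msg.set_content("relayed through the ISP smarthost rather than direct from the adsl address")

# "smtp.iprimus.com.au" is an assumed smarthost name, not taken from the bounce
with smtplib.SMTP("smtp.iprimus.com.au", 25) as relay:
    relay.send_message(msg)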

rick

--- Begin Message ---
  • To: <rjm@zenucom.com>
  • Subject: Returned mail: see transcript for details
  • From: Mail Delivery Subsystem <MAILER-DAEMON@zenucom.com>
  • Date: Fri, 25 Jul 2003 06:56:58 +1000
  • Auto-submitted: auto-generated (failure)
The original message was received at Fri, 25 Jul 2003 06:56:51 +1000
from localhost.localdomain [127.0.0.1]

   ----- The following addresses had permanent fatal errors -----
<simonstl@simonstl.com>
    (reason: 550 5.0.0 <simonstl@simonstl.com>... We do not accept mail directly from IPrimus hosts. Please use your provider's outbound mail server.)

   ----- Transcript of session follows -----
... while talking to mail.simonstl.com.:
>>> DATA
<<< 550 5.0.0 <simonstl@simonstl.com>... We do not accept mail directly from IPrimus hosts. Please use your provider's outbound mail server.
550 5.1.1 <simonstl@simonstl.com>... User unknown
<<< 503 5.0.0 Need RCPT (recipient)
Reporting-MTA: dns; znote.zenucom.com
Received-From-MTA: DNS; localhost.localdomain
Arrival-Date: Fri, 25 Jul 2003 06:56:51 +1000

Final-Recipient: RFC822; simonstl@simonstl.com
Action: failed
Status: 5.0.0
Remote-MTA: DNS; mail.simonstl.com
Diagnostic-Code: SMTP; 550 5.0.0 <simonstl@simonstl.com>... We do not accept mail directly from IPrimus hosts. Please use your provider's outbound mail server.
Last-Attempt-Date: Fri, 25 Jul 2003 06:56:57 +1000
--- Begin Message ---
when building a semantic database engine, the one thing i realised is
that to be intelligent, it must be able to make mistakes and learn from
them - it learns at this stage from interaction with its designer (me),
who either builds in new abilities, or accepts that some things will
also be a bit of a guess.

ever since we learned about np-complete problems (a long time ago now) -
problems like the shortest walk around a graph - we have had to accept
that you can't be right all the time. the fantastic thing about the
human brain is its ability to cope with the insoluble and the errors in
a situation. the "life goes on" ability.
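
to make that concrete, here's a toy python sketch (my own illustration,
not anything from the engine) of the sort of compromise i mean: a greedy
nearest-neighbour walk around a set of points. it's quick and usually
reasonable, but nothing guarantees it's the shortest tour - you accept
the approximation and life goes on.

# toy illustration only: a greedy nearest-neighbour tour.
# fast, but it can fall well short of optimal - the point is that
# we accept the approximate answer and move on.
import math

def nearest_neighbour_tour(points):
    """visit every point, always hopping to the closest unvisited one."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        here = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (1, 5), (4, 1), (6, 6), (2, 3)]
print(nearest_neighbour_tour(cities))   # a tour, not necessarily the best one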

anyone who has tried building large it systems knows the frustration of
discovering that the so-called experts in a business often haven't the
faintest idea of what they're really doing, and they frequently make
significant mistakes. but somehow the whole thing - business and people
- keeps working.

all of which is to say that we have to build learning, and the
acceptance of faults, into any semantic or intelligent system. i'd be
very suspicious of any insistence that we can do otherwise. it's in the
same category as perpetual motion machines.

rick

On Fri, 2003-07-25 at 00:03, Simon St.Laurent wrote:
> cowan@mercury.ccil.org (John Cowan) writes:
> >> (b) does the difference have any effect on the behavior of the
> >> system?
> >
> >Definitely, since "the system" includes human beings and other
> >inference-drawing machines.  
> 
> To me, this is where it gets interesting.  Part of the genius of the
> original Web was that it didn't mind bad URLs - humans were part of the
> system and could deal with the 404 Not Found messages themselves.
> Annoying, but not likely to cause especially complicated problems.
> 
> In the Semantic Web, on the other hand, the URIs are under the covers,
> with no simple "GET it and tell me an answer or give me an error".  "The
> system", as it did for the Web, includes human beings and all their
> interpretive and creative foibles, but not their convenient
> error-handling capabilities - at least not until an awful lot of URIs
> may have been processed.
> 
> The original Web was simple enough that exception handling could bubble
> out to humans and there wouldn't be a huge problem.  The Semantic Web is
> both a lot more complicated and its keepers try very hard to keep humans
> far away, which seems like a seriously dangerous approach to me.
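
a rough python sketch of the contrast simon describes - on the original
web, "GET it and tell me an answer or give me an error" means the
failure surfaces straight to a person (the url below is just a
placeholder of mine):

import urllib.request
import urllib.error

def fetch(url):
    """GET it and tell me an answer or give me an error."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.read()
    except urllib.error.HTTPError as err:
        # the failure goes straight back to the human: annoying, but life goes on
        print(f"{url}: {err.code} {err.reason}")
        return None

fetch("http://example.org/no-such-page")   # placeholder url, expect a 404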

--- End Message ---
--- End Message ---




 
