- To: email@example.com
- Subject: Re: [xml-dev] Statistical vs "semantic web" approaches to making sense of the Net
- From: Mike Champion <firstname.lastname@example.org>
- Date: Thu, 24 Apr 2003 19:57:08 -0400
- In-reply-to: <EFCE3FB1E4D1114A8F13841C4BBBB541914AB2@MAIL-04VS.atlarge.net>
- References: <EFCE3FB1E4D1114A8F13841C4BBBB541914AB2@MAIL-04VS.atlarge.net>
- User-agent: Opera7.10/Win32 M2 build 2840
On Thu, 24 Apr 2003 07:38:27 -0500, James Governor <email@example.com> wrote:
> I hope the above rant doesn't seem too tangential, but I really wanted
> to at least point to an alternative conception of "meaning" and how it
> relates to computers. Computers are not good at making subjective
> judgements or semantic distinctions, but they are (the web is)
> perhaps the only way to track the subjective judgements of vast numbers
> of people; from this analysis, meaning and semantics can arise.
Thanks ... very interesting points. FWIW, I would put a Genetic
Algorithm-ish approach that extracts "meaningful" information by its actual
utility in practice into the general category of things I labelled
"statistical", although perhaps that is stretching it a bit.
> The relationships
> between resources, elements and people, are where we find meaning.
> Computers are very useful tools for monitoring these interactions.
> Obviously this is a form of statistical analysis, but I figured it was
> relevant and differentiated enough to be worth mentioning.
Yes! And the reminder of Steven Johnson's stuff is appreciated ... when he
wrote, there were few weblogs, and none of the trackback, pingback, and
related machinery that "closes the feedback loop" he noted was missing on
the Web. Some
way of harvesting all this stuff back into XML tags or semantic metadata
(e.g. "title" is probably used in its legal sense in a community of real
estate agents, its publishing sense in a community of authors, its
honorific sense in a community of aristocracy buffs ...) would be very