Roger--
Some more extensive comments.
--Frank
Costello, Roger L. wrote:
> *Semantic Web for the Masses, by the Masses*
> * Roger L. Costello*
>
> 1. Enriching the Web with semantics will require everyone to pitch in to
> add descriptions (semantics) to individual Web documents.
OK so far.
>
> 1.1 Semantics will not be added by semantic gurus, but, rather, by the
> common users.
>
> Example: A person (a common user) takes a JPG photo of
> a coastline, and then annotates it with this description:
>
> "This is a picture of the New England coastline."
>
Again, OK so far. A lot of this can be done now, can't it? For
example, if the photo appears in the Web page, you can put this
information in as a caption.
> 2. The barrier to entry must be low. That is, the barrier to a common
> user adding a description (i.e., semantics) to a Web document must be low.
>
> 2.1 Complex ontology languages such as RDF and OWL are out of reach for
> all but the semantic gurus, and are thus not used. Even "vanilla XML"
> is out of reach for the common user, and is thus not used.
Referring to RDF as a "complex ontology language" is kind of FUD. Fine,
imagine you take out RDF's URIs. Then it's far from complicated to
allow common users to add simple attribute/value pairs (comparable to
RDF statements) as metadata describing such content. This is
essentially the basis of Adobe's XMP, and Google Base.
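A toy sketch of what I mean, in Python (the photo name and attribute names
are made-up examples, not XMP's or Google Base's actual schema):

```python
# Hypothetical sketch: metadata as simple attribute/value pairs,
# comparable to RDF statements with the URIs stripped away.
# "coastline.jpg" and the attribute names are invented examples.

annotations = []

def annotate(resource, attribute, value):
    """Record one attribute/value pair about a resource."""
    annotations.append((resource, attribute, value))

annotate("coastline.jpg", "depicts", "New England coastline")
annotate("coastline.jpg", "format", "image/jpeg")

# Look up everything said about the photo.
for resource, attribute, value in annotations:
    if resource == "coastline.jpg":
        print(f"{attribute}: {value}")
```

That's about as low a barrier to entry as it gets, and it's already RDF-shaped.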
>
> 2.2 A Web document is enriched with semantics by the common user simply
> writing a description, in a natural language such as English (see
> above for an example of a description).
This is one kind of "bottom up" semantics addition. It's just starting
from a slightly different "bottom" than conventional Semantic Web
notions. You also have to look at some of the other elements that are
going to be involved. The Semantic Web is about software accessing this
content. So you need some software that will interpret these kinds of
simple natural language statements (as you note later on). An obvious
internal way of interpreting them is as one or more simple statements
ala RDF (as you also note later on). The conventional Semantic Web
starts with users adding these statements directly, rather than assuming
there will be a natural language interpreter of them. How different,
really, is what you propose from a simple front-end that allows a user
to annotate content with simple attribute/value pairs?
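For instance, the trivial end of such an interpreter might look like this
(a purely illustrative sketch; the pattern and names are invented, and
anything beyond this one fixed phrasing would need real natural-language
processing):

```python
import re

# Purely illustrative: map one fixed natural-language pattern onto an
# RDF-style (subject, predicate, object) statement. A real interpreter
# would need far more than a single regular expression.

def interpret(resource, description):
    """Turn 'This is a picture of X.' into a simple triple, or None."""
    match = re.match(r"This is a picture of (.+?)\.?$", description)
    if match:
        return (resource, "depicts", match.group(1))
    return None

triple = interpret("coastline.jpg",
                   "This is a picture of the New England coastline.")
print(triple)  # ('coastline.jpg', 'depicts', 'the New England coastline')
```

Internally the result is indistinguishable from a statement the user could
have entered directly through an attribute/value front-end.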
Also, so far you don't describe any way for the user to indicate or
refer to the meanings (or descriptions) of terms used in the natural
language descriptions (e.g., what does this user mean by "picture"), and
whether those meanings are the same as, or different from, other users'
usage of the same terms. Of course, you can do *that* in natural
language too, but remember that the idea is for software, not people, to
be able to access the semantics. (If you assume that the interpreting
software is capable of interpreting arbitrarily complicated natural
language, then presumably, if the content is largely textual, ala much of
the content on the current Web, a lot of the meaning can be extracted
without much if any extra annotation.)
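To make the term-meaning point concrete, here's an invented sketch (the
URIs and user names are hypothetical): a term only becomes comparable by
software once it's tied to some shared identifier.

```python
# Hypothetical sketch: two users both say "picture", but tying each
# user's terms to vocabulary URIs lets software check whether they
# mean the same thing. All URIs and names here are invented.

vocab = {
    "user_a": {"picture": "http://example.org/vocab/photo#Photograph"},
    "user_b": {"picture": "http://example.org/vocab/art#Painting"},
}

def same_meaning(user1, user2, term):
    """Do two users' uses of a term resolve to the same vocabulary URI?"""
    return vocab[user1].get(term) == vocab[user2].get(term)

print(same_meaning("user_a", "user_b", "picture"))  # False: different URIs
```

Without something like this, the interpreting software has to guess, which
brings you right back to needing heavyweight natural-language understanding.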
>
> 3. The semantic web must be self-regulating.
>
> 3.1 A description that is written by one common user may be edited by
> another common user. Presumably the latter common user has more
> knowledge and is thus able to correct or add to the description.
>
> Example. A second person with further information edits the above
> description:
>
> "This is a picture of the New England coastline, near
> the Boston harbor."
>
> 3.2 Common users regulate themselves - they ensure that all descriptions
> of a Web document are consistent.
As noted by others, there are some issues here related to what things
like "self-regulating" mean. Also, it seems to me that assuming that
users can edit a description written by another user unnecessarily
bundles things. Why not instead assume, as the conventional Semantic
Web does, that users simply separately add their own statements about
the resource (the picture, in this case), without changing the original
description? These added statements may be additions ("near the Boston
Harbor") or corrections/contradictions ("it isn't the New England
coastline, it's the Irish coastline"), as well as statements describing
the source or trustworthiness of either the information, or the user
providing it ("I know it's the Irish coastline, because I took the
picture in the first place; and by the way, you didn't get permission
to use that picture!"). Other users then decide for themselves which
combinations of statements are useful to them, and which statements they
want to trust (and how much). There's no real need for consistency in
the sense that the combination of all statements posted is consistent
(and a given user can decide how much consistency she/he needs anyway).
This is "self-regulating" not in some global sense that *the Web* does
the regulation, but in the sense that individual users (or software
acting for them) do the regulation simply by deciding what information
they want to use.
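A hypothetical sketch of that model (all user names and attributes are
invented): statements accumulate with their source attached, and each
reader filters by their own trust list instead of demanding global
consistency.

```python
# Hypothetical sketch: contradictory statements coexist in the pool;
# consistency is decided per reader, not globally. Names are invented.

statements = [
    ("coastline.jpg", "depicts", "New England coastline", "user_a"),
    ("coastline.jpg", "near", "Boston harbor", "user_b"),
    ("coastline.jpg", "depicts", "Irish coastline", "user_c"),  # contradicts user_a
]

def view_for(trusted_users):
    """Each reader keeps only statements from sources they trust."""
    return [(s, p, o) for (s, p, o, who) in statements if who in trusted_users]

# One reader's view is consistent; another, trusting a different source,
# sees a different but also internally consistent picture.
print(view_for({"user_a", "user_b"}))
print(view_for({"user_c"}))
```

Nothing ever has to be edited or deleted; the original description survives
alongside the corrections.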
>
> 4. The tool used by the common user to annotate a Web document with a
> description (semantics) must be lightweight.
>
> 4.1 A simple text box with a basic editor and versioning will suffice.
>
> 5. Advanced semantic machine processing is a service provided by a
> limited set of companies that employ Ph.D. semantic gurus.
>
> 5.1 Company XYZ is one of those limited set of companies. It employs
> Ph.D. semantic gurus. They write advanced code to process all the
> descriptions written by the common users. They use RDF and OWL, if they
> desire.
But if this advanced code is what is interpreting the descriptions, how
much of the semantics is determined by what the users add as
descriptions, and how much by what the semantic gurus decide the
advanced code will interpret them as meaning?
>
> Comments? /Roger