Miles Sabin wrote:
>>Probably none. But I wouldn't define an executable transformation in
>>an XSLT program when I could define a declarative mapping in a single
>>line of OWL.
> To a certain extent this is missing the point.
> I don't think anyone would say that using XSLT transforms to map between
> vocabularies is in any interesting sense a SW technology. And yet it
> seems to be enough to cover one of the primary use-cases for OWL.
Yes, if you don't care about reliability, performance, security or using
specs as they were designed to be used, then XSLT can be used to handle
some carefully chosen subset of OWL's use-cases.
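For concreteness, here is a sketch of what "a declarative mapping in a single line" buys you. The vocabulary names (`ebxml:price`, `biztalk:cost`) and the minimal Python rule engine are illustrative assumptions, not any real API or reasoner: one `owl:equivalentProperty` assertion, and a generic, vocabulary-independent rule does the renaming that an XSLT transform would hard-code per element.

```python
# Toy illustration: a single declarative owl:equivalentProperty assertion,
# plus a generic rule that applies it. All URIs and data are made up.
EQUIV = "owl:equivalentProperty"

def saturate(triples):
    """Repeatedly apply the equivalentProperty rule until no new
    triples appear: (s, P, o) and P equiv Q entail (s, Q, o)."""
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        # equivalentProperty is symmetric, so close the pairs both ways
        pairs = {(p, q) for (p, r, q) in triples if r == EQUIV}
        pairs |= {(q, p) for (p, q) in pairs}
        for (s, p, o) in list(triples):
            for (p1, p2) in pairs:
                if p == p1 and (s, p2, o) not in triples:
                    triples.add((s, p2, o))
                    changed = True
    return triples

# The entire "mapping" is this one line of data:
mapping = ("ebxml:price", EQUIV, "biztalk:cost")
data = {("item42", "ebxml:price", "100")}
inferred = saturate(data | {mapping})
```

The rule engine is fixed; only the one-line assertion varies per vocabulary pair, whereas an XSLT doing the same job pairs a template with every mapped element.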
>>First, the XSLT is Turing complete. That means that if you send an
>>agent onto the Web to find an XSLT that translates ebXML into
>>BizTalk, you have no idea what computational resources you are
>>committing to run the transformation it finds. You can't even kill
>>the process with confidence when it runs in an "unreasonable amount
>>of time" because perhaps it was just one second away from completing
>>the task. So you set some arbitrary runtime limit and take your chances.
> True ... but is it all that surprising? Is it really very different from
> hitting a browser's stop button when a web site seems sluggish? If you'd
> hung on for another couple of seconds you might have got a response.
> Isn't this just the way that the web works?
No. The remote service is using only one resource on my computer, a
socket. A downloaded XSLT running on my computer is using MY memory,
which makes me very vulnerable to DoS.
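The "arbitrary runtime limit" problem is easy to demonstrate. The sketch below is illustrative Python, not anything from the thread: a toy computation whose running time is hard to predict from its input (nobody has even proved that the Collatz iteration terminates for every input), run under a step budget that may kill it one step short of finishing.

```python
def run_with_budget(n, budget=1000):
    """Drive the Collatz iteration under an arbitrary step budget.
    Returns the step count if it finishes, else gives up -- with no
    way to know whether it was about to finish."""
    steps = 0
    while n != 1:
        if steps >= budget:
            raise TimeoutError(f"gave up after {budget} steps")
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```

`run_with_budget(27)` takes 111 steps; `run_with_budget(27, budget=110)` throws all of that work away for want of one more step, which is exactly the bind a downloaded Turing-complete transform puts you in.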
>>I *believe* that the computational properties of semantic
>>web reasoners are more predictable. But I am not a computational
>>logician so I could be wrong on that.
> I don't believe that any logic that's likely to be interesting in this
> kind of context is decidable.
Why not? What evidence do you have that OWL is not in general decidable?
> ... Let's face it, your semantic web reasoner
> is going to have to do the same job as the transform, so how could it
> be guaranteed to terminate in finite time if the transform can't be?
You're the one who suggested using a whole Turing-complete transform to
do simple property equivalence! By analogy, I could say that my Cavalier
is quite acceptable to get me to work. You would propose instead that an
18-wheeler would do the same job. I would say, no, the Cavalier is
easier to park. You would respond: "yeah, but if you're going to be able
to carry the stuff that an 18-wheeler can carry, you're going to need a
big parking spot anyhow."
Well, yeah. But OWL isn't Prolog. If I want Prolog I'll use Prolog. Then
we can compare Prolog to XSLT.
>>Also, there are issues around security and analysis of
>>Turing-complete programs versus declarative specifications. And
>>serious optimization issues! So overall, I see the OWL route as being
>>more reliable, secure and performant.
> XSLT can be compiled to Java (amongst other things), and you can prove
> the security properties of Java programs ... that's what the bytecode
> verifier does.
First, those verifiers are typically buggy as hell because they are
brutal to implement. Second, you invariably run into the Turing tarpit
when trying to detect DOS.
> ... As things stand that doesn't address resource
> consumption issues, but you can expect some developments on that front
> in the not too distant future.
Are they going to revoke undecidability?
> ... Obviously all this is modulo
> implementation issues ... but I don't see why OWL-consuming reasoners
> should be any less prone to reliability, security and performance
> issues than any other piece of software.
Perhaps because they are _much_ simpler???
>>Second, the XSLT specification has weak support for handing off from
>>one XSLT to another on a very granular (per-element) basis. Consider:
> Granted ... but I only said: eg. XSLT.
So now you're comparing OWL to an idealized language that you can't
even point to.
> Sure, but I don't think there's anything which in principle prevents a
> sequence of XSLT transforms from being composed into a single transform.
Turing completeness will in general prevent you from detecting whether
that composition is possible.
> And again, I only said: eg. XSLT. It might well be that XSLT is too hard
> to compose effectively, in which case we need to find a better model.
> These things are only automata, after all ... and composing automata
> isn't exactly virgin territory.
Composing some types of automata is easy. Composing others is hard.
There is no general rule.
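To make the easy case concrete, here is an illustrative Python sketch (deterministic finite-state transducers encoded as plain dicts, not any standard library): composing two of them is just a product construction, which is why finite-state pipelines compose so cheaply. No comparable construction exists for arbitrary Turing-complete transforms.

```python
def compose(t1, t2):
    """Product construction for two deterministic finite-state
    transducers, each a dict (state, in_sym) -> (next_state, out_sym).
    The composed machine feeds t1's output straight into t2."""
    states1 = {s for s, _ in t1} | {ns for ns, _ in t1.values()}
    states2 = {s for s, _ in t2} | {ns for ns, _ in t2.values()}
    composed = {}
    for s1 in states1:
        for s2 in states2:
            for (q, a), (n1, b) in t1.items():
                if q == s1 and (s2, b) in t2:
                    n2, c = t2[(s2, b)]
                    composed[((s1, s2), a)] = ((n1, n2), c)
    return composed

def run(t, state, inputs):
    """Run a transducer over an input string, collecting its output."""
    out = []
    for a in inputs:
        state, o = t[(state, a)]
        out.append(o)
    return "".join(out)

# Two one-state transducers: rewrite 'a' -> 'b', then 'b' -> 'c'.
t1 = {(0, "a"): (0, "b")}
t2 = {(0, "b"): (0, "c")}
t12 = compose(t1, t2)
```

Running the composed machine gives the same result as running the two in sequence; the hard cases start once the machines stop being finite-state.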
> Perhaps ... but the point I'm trying to make is that we don't
> necessarily need a "semantic mapping language" in the SW sense. My
> guess is that in many, perhaps most, cases all we need are dumb syntax
> to syntax transforms.
That was the idea behind architectural forms. They have been a failure
in the marketplace of ideas. I've been considering non-Turing-complete
syntax-to-syntax transform languages as replacements for architectural
forms since around 1998. I've spent more brain power trying to develop
these and evaluate potential solutions than I care to mention.
If you want to have a go at it, have fun. Personally, I give up.
I'm happy to see that somebody has done it at the semantic level and I
see no reason to prefer syntactic transforms to semantic mappings. I
suppose XML people have a natural aversion to the semantic layer but I
for one am willing to give it a chance.
The more information you give the computer about what you are trying to
accomplish, the more it can help you. OWL lets me tell the computer a
bunch of things I know about a vocabulary and let it decide when
particular bits of information become relevant. There is a rich body of
knowledge around how to do this using semantic languages, and I don't see
why I should cut myself off from it out of an instinctual fear of
anything that looks like AI.