From: Paul Prescod [mailto:paul@prescod.net]
Bullard, Claude L (Len) wrote:
>...
>
>>Business has friction because business should have a certain amount of
>>friction (e.g. accountants and auditors). Technology that adds
>>integration friction to the social friction is just overhead.
>
> Maybe. Sometimes it is worth slowing a process down, but that
> should be built in and not a side effect. Do remember the Thule
> Greenland incident in 1960. Had the process been running in
> machine time, North America and most of Europe and Asia would
> still be cinders. Humans should stay in the loop.
>I agree. But as you say, it should be built-in and not a side effect of
>poor technology choices. We shouldn't use 14.4 modems to slow down stock
>traders; we should put in explicit brakes and maybe delays in the system
>which cannot be "gamed" merely by upgrading the modem.
I agree. OTOH, if a 14.4 modem saves a few million lives, I won't
use that occasion to upgrade it just because something better is
available. I'll look at the case at hand and say, "hmm, this
suggests a brake is needed. Keep using that one until you can
prove to me that a better solution is at hand, ready to install,
on maintenance, etc.".
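To make the "explicit brake" idea concrete, here is a minimal sketch in
Python. Every name in it (ORDER_INTERVAL, submit_order, process) is
invented for illustration; the point is only that the delay is policy,
enforced on the server, so a faster modem buys the client nothing.

    import time

    ORDER_INTERVAL = 0.5      # seconds; set by policy, not by bandwidth
    _last_order = {}          # account -> time of last accepted order

    def process(order):
        return "accepted"     # stand-in for the real order handler

    def submit_order(account, order):
        now = time.monotonic()
        earliest = _last_order.get(account, 0.0) + ORDER_INTERVAL
        if now < earliest:
            time.sleep(earliest - now)   # the brake lives here, server-side
        _last_order[account] = time.monotonic()
        return process(order)

Upgrading the client's link changes nothing above; only changing
ORDER_INTERVAL does, and that is an explicit, auditable decision.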
>> Because setting up a web service takes a certain amount of
>> savvy.
>Not really. Not with the modern tools.
It still takes some training to write and wrap an object,
tools or no tools. The tools take out some monkey work,
but not the programming. And in any case, I'd still
subscribe to a vetted service for mission-critical apps.
There really is money available for good service. Too
many out here still think they are operating in the
New eCONomy.
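For a sense of what "write and wrap an object" amounts to, a minimal
sketch using Python's standard-library XML-RPC server; the Quote class
and its data are made up:

    from xmlrpc.server import SimpleXMLRPCServer

    class Quote:
        def price(self, symbol):
            return {"symbol": symbol, "price": 42.0}   # stand-in data

    server = SimpleXMLRPCServer(("localhost", 8000))
    server.register_instance(Quote())   # wrapping the object is one line
    server.serve_forever()

The wrapping really is one line; deciding what to expose, validating
inputs, and keeping the thing patched and monitored is the part no tool
removes, and the part a vetted service charges for.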
>> .... Almost any idiot can build a web page and plant
>> references to toss Google a curve.
>The people who do this seriously study it and have even built businesses
>around it. Do you think that the difference between a URI and an RPC
>call or between the ease of putting a page on IIS vs. using ASP are
>going to slow them down?
Not all of them. But some, and the remainder will be easier to
catch and prosecute. Bailiff, whack their johnsons!
>There are a variety of reasons to dismiss Google as a discovery
>mechanism *in some situations*. But if you are comparing to UDDI then
>trust in the data is not a good reason. Anybody can send random data to
>a UDDI server and in general that data is not vetted. (see
>uddi.microsoft.com). I could register as an arms dealer for all the UDDI
>engine knows or cares.
As deHora pointed out, signatures. Anyway, the point is that Google is
inadequate for this application by the very technology that makes it
successful for other applications. It is the example that is its own
counter-example. It's like getting a hit with a three chord rock song
and trying to do a symphony for the next release. The market says, nope.
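A sketch of the signature check deHora was pointing at, using the
third-party "cryptography" package; entry_bytes, signature, and
publisher_key are assumed inputs, and in practice publisher_key has to
come from a key you already trust:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def vetted(entry_bytes, signature, publisher_key):
        """Accept a registry entry only if a trusted publisher signed it."""
        try:
            publisher_key.verify(signature, entry_bytes,
                                 padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False

The signature doesn't make the data true; it makes the publisher
accountable, which is exactly what an open UDDI node lacks.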
>It would take your average Perl-programming system administrator fifteen
>minutes to figure out how to game that system. Google is actually much
>harder because Google aggregates and rates information from so many sources.
See above. It only takes a small conspiracy to out that system.
>Sure, good registries may turn out to be pay-based services rather than
>free. That has nothing to do with whether the centralized UDDI RPC model
>is the right one. IMHO, it clearly isn't. The Web Services world needs a
>trusted *search engine* and *registry*, not *repository*. Yahoo is
>actually the best analogy.
Clearly, like the 14.4 modem, it is a technology doing a particular job
at a particular time. I don't like a centralized model either but that
is what Google is. OTW, I agree. We can't take the "Just XML
and Google" articles very seriously. The problem is harder than that
and if I've learned nothing else from the web I've learned to look
at 80/20 solutions with a jaundiced eye given that we eventually
end up piling complexity on them to cover their bunnies the first
time someone takes a backward glance.
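To pin down Paul's registry-versus-repository distinction, one last
sketch: a registry entry is a signed pointer plus searchable metadata,
nothing more; the service itself lives elsewhere. Field names here are
invented:

    from dataclasses import dataclass

    @dataclass
    class RegistryEntry:
        service_uri: str    # where the service actually lives
        description: str    # metadata for search, never the artifact
        publisher: str      # who vouches for the entry
        signature: bytes    # binds the publisher to the entry (see above)

A repository stores and serves the artifact; a registry only tells you
where to look and whom to blame.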
len