From: Thomas B. Passin [mailto:firstname.lastname@example.org]
[Bullard, Claude L (Len)]
>> Anyone who thinks we can magically hook up the
>> world's businesses and skip the step of
>> creating the vocabularies missed Markup 101.
And it is a hard job. And if you have multiple
parties (the mythical industry wide schema), you
can get old and fat doing it.
>> On the other hand, I'm still not sure the
>> interface model changes that requirement.
>> I can see it working either way.
>Here's what I've seen. With an API, you have to learn the API AND the
>semantics of the parameters. When the API is complex, that's a big task and
>a lot of inconsistencies can creep in.
The semantics of the document type can also be hard, can drift, and can
be interpreted differently. It often takes several tries and multiple
phone calls and F2F meetings to work that out. Then, if the people
who do understand it don't teach it or explain it well, this process
gets repeated sometimes after a catastrophic failure. The major advantage
is having a tight validator followed by semantic validation. The first
part is simple (kill DTDs at your peril until you are satisfied with
alternatives), but the second part is still code. The first part
depends on how many people you can get in the room to agree and
know and prove they agree. Also, how many organizational entities
(at the company level, not people) must buy into the document and
how frequently will they use it? Are you building a giant schema
or DTD from Hell (eg, 28001), or lots of small ones? At what point
does the cost of medium-sized sets of parameters equal that of small document types?
I'm not saying you are wrong, just that there can be a lot of expense
and time lost working out a schema that satisfies multiple requirements
for multiple parties and that the schema itself can forge chains on the
business relationships that give nimbler competitors advantages. Losses either way.
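The two-phase pattern described above (a tight structural validator, then semantic checks in code) can be sketched as follows. This is a minimal illustration, not anyone's production setup: the `<order>` vocabulary and the business rule are invented for the example, and a real deployment would run phase one with a validating parser against the agreed DTD or schema rather than a mere well-formedness check.

```python
# Sketch of two-phase validation: a structural pass, then semantic
# rules that no DTD can express. The <order> vocabulary here is
# hypothetical, purely for illustration.
import xml.etree.ElementTree as ET

DOC = """<order id="42">
  <item sku="A1" qty="3"/>
  <item sku="B2" qty="0"/>
</order>"""

def validate_structure(text):
    # Phase 1: well-formedness and required structure.
    # (A real system would use a validating parser against the
    # agreed DTD; ElementTree only checks syntax.)
    root = ET.fromstring(text)
    assert root.tag == "order" and "id" in root.attrib
    return root

def validate_semantics(root):
    # Phase 2: business rules -- "the second part is still code".
    errors = []
    for item in root.findall("item"):
        if int(item.get("qty", "0")) < 1:
            errors.append(f"item {item.get('sku')}: qty must be positive")
    return errors

root = validate_structure(DOC)
print(validate_semantics(root))  # ['item B2: qty must be positive']
```

The point of the split is that phase one is cheap and declarative once the parties agree on the vocabulary, while phase two is exactly the code that cannot be negotiated away.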
>If I would only have to learn the semantics of the parameters and not also
>have to learn the API mechanics, that would be a large advantage. It would
>save a lot of time and reduce the number of errors and misunderstandings
>(and bugs) to deal with.
Is it easier to learn an API with a few calls, each of which does a lot,
or to understand how the same set of primitive calls (eg, HTTP) is
organized to achieve the same goal?
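The contrast in that question can be made concrete. Everything below is invented for illustration: one coarse-grained call that does a lot and carries many parameters to learn, versus the same goal composed from a couple of uniform, HTTP-like primitives applied to named resources.

```python
# Hypothetical contrast: one fat API call vs. a few uniform
# primitives organized by convention. All names are invented.

# Style 1: one coarse-grained call, many parameters to learn.
def submit_purchase_order(customer, items, ship_to, terms, notify):
    return {"status": "accepted", "customer": customer}

# Style 2: two primitive verbs, applied uniformly to resources.
store = {}

def put(path, body):
    # Uniform verb: create or replace the resource at 'path'.
    store[path] = body

def get(path):
    # Uniform verb: read the resource at 'path'.
    return store.get(path)

# The same goal, expressed as primitive operations on resources:
put("/orders/42", {"customer": "acme", "items": ["A1"]})
put("/orders/42/shipping", {"ship_to": "depot 7"})
print(get("/orders/42"))  # {'customer': 'acme', 'items': ['A1']}
```

In style 1 the learning cost lives in the parameter list; in style 2 it lives in the conventions for naming and organizing the resources.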
>> But that is a
>> business app. It is intelligence, not
>> command and control in real time. For desktop
>> level C2, one really might want RPC and a
>> more tightly coupled system.
>> What do you think?
>I find myself lately developing browser-based applications for my own
>desktop use, when the app needs a GUI, because I don't have to develop all
>the GUI stuff myself one more time, and also because I have the possibility
>of putting it on other computers without installing anything. Even with the
>real disadvantages of a browser interface, many times it still works out
>best, especially for prototypes.
I agree with the proviso that the sites to which I deploy accept the
ubiquity of a single browser implementation. IE wins.
>Another useful approach to desktop apps is to make them fairly modular - or
>built with components - and to think of the communications between the
>components as something you might change or extend later on.
>For example, you might have the components communicate using xml.
Certainly, and most of the people I talk to doing this agree that this
is where XML shines: as a data transport.
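A minimal sketch of that pattern: two in-process components that exchange data as an XML message, so the transport can later be moved across a pipe or socket without changing either side. The `readings` vocabulary is invented for the example.

```python
# Two components coupled only by an XML message vocabulary.
# The element names are hypothetical, for illustration only.
import xml.etree.ElementTree as ET

def producer(readings):
    # Component A: serialize its data into an XML message.
    root = ET.Element("readings")
    for name, value in readings.items():
        ET.SubElement(root, "reading", name=name, value=str(value))
    return ET.tostring(root, encoding="unicode")

def consumer(message):
    # Component B: knows only the message vocabulary,
    # nothing about component A's internals.
    root = ET.fromstring(message)
    return {r.get("name"): float(r.get("value"))
            for r in root.findall("reading")}

msg = producer({"temp": 21.5, "rpm": 900})
print(consumer(msg))  # {'temp': 21.5, 'rpm': 900.0}
```

Because the components agree only on the message, either one can be replaced or relocated later, which is the loose coupling the rest of the thread argues for.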
>I think this answers your question by coming down on the side of loose
>coupling, even if you might end up making use of RPC.