Interesting thread, but I don't fully understand the substance of the
argument, which seems to be:
(1) A browser blindly displays the HTML on a screen, i.e., there is no
semantic processing of the data.
(2) A human does the semantic processing of what's displayed on the
screen.
I don't know what semantic processing is (it sounds, by definition, like
something that humans do). But those who think browsers don't do
/significant/ processing should consider what implementing
HTML+Stylesheet rendering might involve, compared to say, stuffing an
update into a database.
In the case of machine processing of HTML then HTML-REST is on the
order of N^2, i.e., each machine must have n customized programs to
process the n HTML web sites. ]]
Can you restate this? I'm probably not semantically processing that
statement properly. What does 'process the n HTML web sites' mean to you?
The HTTP methods (GET, POST, PUT, etc) provide a nice set of **access
methods** for getting and putting the XML data. Thus,
**accessibility** is scalable as it is on the order of linear.
I therefore conclude that XML-REST is no more scalable than SOAP. ]]
I could only conclude from your argument that XML-REST is no more
scalable than HTML-REST. And you didn't mention SOAP until then.
I think you're saying that XML processing is no more scalable than HTML
screen scraping; that has something to do with humans doing a lot of the
interpretive donkey work and the fact that the receiver will have to
have some code to munge the incoming XML from a site. But your line of
argument isn't gelling. Perhaps that's because I don't understand what
you mean by 'custom code', and because 'process' has more than one
meaning here:
1) the level of work required to programmatically scrape HTML web sites
versus programmatically process XML ones needs to be factored in,
semantics aside, them's apples and oranges;
2) the assumption that XML-REST data won't promptly end up in front of a
human, and thus isn't like HTML-REST data, is questionable; a lot of this
stuff will be delivered to devices as a flexible replacement for HTML;
3) you're assuming n modules will be required to process n web sites; is
that a reasonable assumption? (consider this as a data point; we haven't
written n servers to build n web sites);
4) we're more likely to see n processors where each of those n
specializes in a task (see Sean McGrath's material on XPipe or the
Pipeline spec; this type of distributed processing has been called
specialist parallel). These can work as an ensemble, or pass the work
on to a more suitable processor. Some redundancy will surely occur (NIH,
or because programmers like rewriting stuff, or more likely because they
can't find what they need), but I suspect that will result in nothing
like the order or complexity suggested (where complexity means processes
are necessarily tightly bound to data).
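To make point 1's apples-and-oranges concrete, here's a rough sketch;
the HTML snippet, the XML vocabulary, and the price value are all made
up for illustration, but the coupling difference is the point:

```python
import re
import xml.etree.ElementTree as ET

# Scraping HTML: the code is coupled to the page's *presentation*. If the
# site moves the price into a different element or class, this silently
# breaks, and every site needs its own version of this.
html = '<td class="price">  $19.99 </td>'
match = re.search(r'class="price">\s*\$([0-9.]+)', html)
price_from_html = float(match.group(1))

# Processing XML: the code is coupled to a published markup vocabulary,
# not to layout; presentation can change freely without breaking it.
xml = '<item><price currency="USD">19.99</price></item>'
price_from_xml = float(ET.fromstring(xml).findtext('price'))

print(price_from_html, price_from_xml)
```

Same value extracted either way, but the first program is bound to one
site's screen layout and the second to a shared data format.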
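The specialist-parallel idea in point 4 can be sketched as a pipeline of
small, single-purpose processors that a document flows through,
XPipe-style. The stage names and the document vocabulary here are
invented for illustration:

```python
import xml.etree.ElementTree as ET

# Each stage is a specialist: it does one job on the document and knows
# nothing about the stages before or after it, so stages are reusable
# across many document sources rather than rewritten per site.
def uppercase_titles(doc):
    for el in doc.iter('title'):
        el.text = (el.text or '').upper()
    return doc

def mark_processed(doc):
    doc.set('status', 'processed')
    return doc

def pipeline(text, stages):
    doc = ET.fromstring(text)
    for stage in stages:
        doc = stage(doc)
    return ET.tostring(doc, encoding='unicode')

result = pipeline('<feed><title>hello</title></feed>',
                  [uppercase_titles, mark_processed])
print(result)  # <feed status="processed"><title>HELLO</title></feed>
```

The point is that n sources don't force n monolithic programs: work is
routed through an ensemble of shared specialists instead.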
Bill de hÓra