Then this shouldn't be about XHTML. It should be about
a better mouse trap. First, just as mental fodder:
http://www.fortune.com/fortune/fastforward/0,15704,661671,00.html
Alan Kay notes that we still have a lot of room to innovate.
1. This should be about a better web browser, not a Spy vs. Spy
among committee members.
2. I do not accept that standardization ALWAYS follows innovation.
The fear of committee-driven innovation is justified not by the
model itself, but by the competence of the members in all of the
dimensions in which they must negotiate. It can work, but it doesn't
always work. Before we abandon it in favor of 'open to implement
but proprietary' specifications, let me note that acceptance of
standards as a way of doing business has benefited the growth of
the web and the industry.
3. But it isn't necessary to grandfather in a lot of standards
and specifications that don't play together. A leap forward,
a real innovation, might be something altogether different
from what we have, and it might be painful for the content
owners. It might be simpler, but it has to be consistent.
4. It might take time. That's ok.
From a perhaps naive perspective, the topical issue comes down to this:
if a resource returned by a URI has <?xml in the document, the
rules for processing it and returning errors should be clear,
simple, and enforced by the browser. If it doesn't have that,
then it is a laissez-faire situation, which is what we have now.
I'm OK with that. If the intent of the author is not made
clear, then the rules are up to the developer, but we have
gone on too long letting the developer set all the rules.
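Just to make that rule concrete, here is a rough sketch, not any
browser's actual behavior: if the body begins with an XML declaration,
parse it strictly and surface well-formedness errors; otherwise fall
back to the forgiving handling we have today. The function name
fetch_and_dispatch is invented for illustration.

    import urllib.request
    import xml.etree.ElementTree as ET

    def fetch_and_dispatch(uri):
        # Hypothetical dispatch rule: strict for declared XML, lenient otherwise.
        body = urllib.request.urlopen(uri).read()
        if body.lstrip().startswith(b"<?xml"):
            try:
                # Strict path: a well-formedness error is reported, not papered over.
                return ("xml", ET.fromstring(body))
            except ET.ParseError as err:
                raise ValueError(f"refusing to render ill-formed XML from {uri}: {err}")
        # Laissez-faire path: whatever forgiving tag-soup handling already exists.
        return ("tag-soup", body)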
As we begin to see more and more portal systems that rely
on web services, the reliability of this approach will fall down
service by service, which is why I sent that 3.1 MB screenshot
to the list. A dumb move (I should have compressed it), but it
is a simple example of what happens when filtering, semantic
or otherwise, combines with non-local control of a
service and globally distributed services. I don't think
building web services for a browser with HTML is a good idea.
I think racing ahead to implement semantic web systems
without working out the problems of reliability is a
recipe for god-awful embarrassment in the minor cases
and election-stuttering disasters in the major ones.
This comes down to a competition among companies that
accept the challenge of improving on the status quo and
accept the risks of a long-term marketing campaign to
get a better technology fielded. It is time to drop
the MicroPhobia, drop the 'we have this specification;
let's form a consortium' game, and work the problem of
building a better and more reliable web authoring and
delivery system. I don't know what that will be.
That's why it is innovation and not just reinvention of
ideas proposed ten years ago.
len
From: Joshua Allen [mailto:joshuaa@microsoft.com]
> > XHTML vastly simplifies machine processing for all sorts of
> > purposes.
>
> Yep, this is the only concrete benefit of XHTML I've seen. It makes it
> easier for people to screen scrape your site. I find this to be a very
> dubious benefit at best.
>
Yeah, it shows the narcissism of most developers. "Please rewrite
your page in some buzzword-compliant gobbledygook subset of XML that is
less reliable and harder to test than what you were already doing, and
then I can theoretically write a screen scraper."
For people who really want repurposable data, we already have the
capability to do XML+XSLT+CSS. My RSS feed and OPML feed are both pure
XML (no XHTML crap) and render nicely in IE and Mozilla. XHTML is a
Frankenstein.
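For what it's worth, a minimal illustration of the XML+XSLT approach
described above (the feed content and stylesheet name are invented):
a pure-XML feed only needs an xml-stylesheet processing instruction
for a browser like IE or Mozilla to render it client-side.

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="rss-to-html.xsl"?>
    <rss version="2.0">
      <channel>
        <title>Example feed</title>
        <item>
          <title>Pure XML, styled by the browser</title>
          <link>http://example.org/item1</link>
        </item>
      </channel>
    </rss>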