- From: "Jeremie Miller" <jeremie@netins.net>
- To: <xml-dev@ic.ac.uk>
- Date: Fri, 23 Jan 1998 11:28:00 -0600
I'm wondering what everyone else thinks about this issue. When a
server-side solution is used to dynamically transform XML content into HTML so
existing browsers can render it appropriately, what happens to a browser
that _can_ deal with the XML or XML + XSL? Or what happens to an
intelligent spider that understands XML? As far as I can tell, right now
nothing happens: they get HTML just like everyone else. But so much is lost,
and it nullifies much of the power of XML and the meta information it
contains.
I guess I'd just like to see this discussed before it becomes a problem. I
suppose something simple like this in the head of the HTML page would work:
<LINK REL="Alternate" TYPE="text/xml" HREF="realpage.xml">
But without any kind of consensus on this, or an official recommendation, I doubt
it would get used.
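Alongside the LINK element, the server-side transformation step itself could check the client's Accept header and hand XML-aware clients the original document. A minimal sketch of that idea (the function name and fallback behavior are my own assumptions, not an established convention):

```python
# Hypothetical sketch: a server-side handler inspects the HTTP Accept
# header and serves the raw XML to clients that advertise support for it,
# falling back to the transformed HTML for everyone else.
def choose_representation(accept_header):
    """Return the media type to serve, based on the client's Accept header."""
    # Strip quality parameters like ";q=0.8" and surrounding whitespace.
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if "text/xml" in accepted:
        return "text/xml"   # XML-aware browser or spider: send realpage.xml
    return "text/html"      # legacy browser: send the transformed HTML

print(choose_representation("text/xml, text/html"))  # -> text/xml
print(choose_representation("text/html"))            # -> text/html
```

This would let XML-capable browsers and spiders get the real document without any change on the client side, though it only works when the client actually sends a distinguishing Accept header.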
Am I missing something, or has this already been discussed?
Thanks,
Jeremie Miller
jer@jeremie.com
http://www.jeremie.com/
xml-dev: A list for W3C XML Developers. To post, mailto:xml-dev@ic.ac.uk
Archived as: http://www.lists.ic.ac.uk/hypermail/xml-dev/
To (un)subscribe, mailto:majordomo@ic.ac.uk the following message;
(un)subscribe xml-dev
To subscribe to the digests, mailto:majordomo@ic.ac.uk the following message;
subscribe xml-dev-digest
List coordinator, Henry Rzepa (mailto:rzepa@ic.ac.uk)