Re: [xml-dev] RE: James Clark: XML versus the Web
- From: Ben Trafford <ben@prodigal.ca>
- To: Kurt Cagle <kurt.cagle@gmail.com>
- Date: Wed, 01 Dec 2010 20:48:34 -0800
I don't disagree with anything Kurt has said, but I would like to repeat a
suggestion I've made elsewhere:
One of the big questions people seem to be avoiding is "Why does HTML still
hard-wire its native behaviors to the markup?" A lot of the issues with XML
in the browser simply go away if we can move HTML's native behaviors into CSS.
For example: I can't declare a link in a browser in anything other than
HTML. Let's say I have a document...
<document>
  <link uri="http://www.prodigal.ca">This is Ben's webpage.</link>
</document>
Why could I not have a CSS stylesheet that looks like this?
link {
  link-type: simple;
  link-href: attr('uri');
}
This only addresses the question of links, but as we all know, there are
a host of behaviors in HTML that have never been disambiguated from the
markup.
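To push the sketch one step further (the property names below are purely
hypothetical, not anything in the current CSS drafts), an image-inclusion
behavior could be pulled out of the markup in exactly the same way:

<figure source="diagram.png">A diagram of the page layout.</figure>

figure {
  replace-type: image;
  replace-src: attr('source');
}

The point isn't these particular names; it's that every behavior HTML bakes
into an element name could instead be a declaration that any element can
carry.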
We can address all the issues that have cropped up as a result of
over-standardization, XML technologies that have matured at differing
paces, etc. Believe me, I would have -loved- to have had RELAX NG back
twelve years ago when we were thinking about XLink. I would've advocated
for making everything about XLink into a schema-based datatype spec
that could be applied without any alteration of the pre-existing markup.
However, something everybody seems to be forgetting is that XML modules
are almost entirely optional. The XML spec requires very little
compliance with anything beyond the basics. You don't -need- to use DOM.
You don't -need- to use XSD.
So, if what we're -really- talking about is "XML vs the Web", the first
place we need to start is not in tearing down all the old cruft, but in
figuring out what needs to be done to existing web technologies to make
XML on the Web workable, with an eye toward compatibility.
I'd never argue, for instance, that we rip all the native behaviors out
of HTML -- it should still display those things natively. But if we
modified CSS to allow it to do everything HTML does natively, -and- to
override native behaviors, then we could have webpages made out of any
markup language people like.
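Most of the plumbing for this already exists, incidentally. You can hand a
browser raw XML today with an xml-stylesheet processing instruction and it
will render it using ordinary CSS display properties; what's missing is only
the behavioral layer. Something along these lines works in current browsers,
with the link-* properties from the sketch above being the hypothetical part:

<?xml-stylesheet type="text/css" href="document.css"?>
<document>
  <title>Ben's Page</title>
  <link uri="http://www.prodigal.ca">This is Ben's webpage.</link>
</document>

/* document.css */
document { display: block; margin: 2em; }
title    { display: block; font-size: 2em; font-weight: bold; }
link     { color: blue; text-decoration: underline;
           /* hypothetical, per the sketch above: */
           link-type: simple;
           link-href: attr('uri'); }

Everything above that comment renders today; everything below it is the
small piece CSS would need to grow.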
Imagine being able to apply web technologies to all the XML that largely
exists behind corporate walls -- ATA 2100 and the five billion documents
that Lexis-Nexis has, etc.
I think that if we actually go back to basics (separating content from
behavior), we can come up with simple, elegant solutions that will make
XML on the web a reality without breaking any of the old stuff...crufty
or not.
And for my money, actually being able to display any markup I like with
all the power of HTML and its related technologies would be a grand
start, and one that requires very little work compared to some of the
thornier issues I've seen bandied about over the last week or two.
--->Ben
On Tue, 2010-11-30 at 22:17 -0500, Kurt Cagle wrote:
> BTW, I'd also agree with Dave Pawson's point here. Maybe it's time to
> reinvent the wheel with an SXML. We know what worked and what didn't.
> We still have a fundamental disconnect between XML and Javascript that
> needs to be addressed, and attempting to do it within the rubric and
> process of XML looks to be a non-starter.
>
>
> Some areas that I'd like to see -
>
>
> * Creating a unified XML/JSON layer, including agreed upon
> serializations and working to extend both XML and JSON to be
> fully transferable.
> * Creating a consistent mechanism for the depiction of both
> closed ontologies (XML) and open ontologies (RDF/OWL).
> * Creating a consistent path/query layer between the syntactic
> (XQuery) and the semantic (SPARQL)
> * Working with ECMA and the browser and mobile vendors to
> provide a universal first class SXML representation.
> * A rethinking of distributed linking systems, especially in
> light of the emergence of RESTful architectures
> These are big areas. They'll require that people check in preconceived
> notions at the door, and they will require champions that will be
> willing to both put in the effort to be the reference implementors and
> to defend the specifications from being co-opted by a given vendor.
>
>
> Kurt Cagle
> XML Architect
> Lockheed / US National Archives ERA Project
>
>
>
> On Tue, Nov 30, 2010 at 10:07 PM, Kurt Cagle <kurt.cagle@gmail.com>
> wrote:
> +2 on DOM
>
>
> DOM was necessary at the time - you needed a way for external
> languages to have low level access to XML in order to create
> tools (such as XPath, XQuery, E4X, etc.) that provided higher
> level accessibility. The problem was that rather than build
> XPath or E4X like layers into the browsers, the browser
> implementers took the DOM spec as the baseline for working
> with XML, and it took years for any kind of advanced
> technology to work its way into even a few of the systems (and
> that often badly). Trying to invoke an XPath statement and do
> anything useful in Mozilla is hideously painful, even though
> it would have taken remarkably little effort to make it more
> usable. E4X (or at a very minimum the creation of XML objects
> as transparent entities that can be queried via a path
> language) would have gone a long way to solving that, but
> there was very little push to get behind it within the W3C,
> because it was seen as an ECMAScript issue.
>
>
> About namespaces - people moan and complain about namespaces,
> but the biggest problem with them overall was that there were
> two schools of thought with regard to namespaces within the
> XML community. The first saw namespaces as a way of applying a
> class-like semantic to a closed but potentially mixed
> ontology; this is actually most evident in languages such as
> XQuery where you have modules that contain functions, which
> provides a (rough) analog to classes and methods within an
> OOP environment. The second saw namespaces as a way of
> identifying authorities, and I believe this is where
> namespaces largely failed. Anyone who has pored through NIEM
> or XBRL records understands how confusing such authority
> layers can be, even to people who otherwise understand the
> specifications themselves. We've had discussions on this
> particular forum before about alternatives to namespaces (with
> some remarkably good ideas being proposed) but rather than
> actually pushing the changes into some kind of action, in most
> cases these ideas ended up becoming simple gripe sessions.
>
>
> Which to me points to the bigger problem - process. AJAX came
> about because a lot of people threw a lot of ideas around,
> kept the ones that worked, and dropped the ones that didn't.
> XML, on the other hand, established a priori specifications
> through long and involved processes, typically with
> comparatively little input from the developer community, and
> often shooting for broad generalized solutions rather than
> creating things from scratch and then seeing how they could
> fit into the XML community overall. Significantly, if you look
> at the most important "standards" in the W3C canon, the ones
> that had the biggest staying power usually were produced by
> one person and then "smoothed out". XPath (James Clark), XSLT
> 1 (James Clark) and 2 (Michael Kay) fall into that category,
> as does RNG (James Clark again), Schematron (Rick Jelliffe),
> XProc (Norm Walsh), XForms (Mark Birbeck and Micah Dubinko),
> XQuery (Michael Kay), RDFa (Mark Birbeck) and the like.
> That's not to say that other people didn't contribute
> significantly to those specs, but it was usually the reference
> implementer that was defining the characteristics of the
> specification in the first place.
>
>
> Look at the specs that people typically complain most heavily
> about - XSD, Namespaces, SOAP and the WSDL stack, SVG (good
> idea, but one that succumbed quickly to corporate dominance),
> XML-RDF, as well as some downright obscure ones like SML (the
> Services Markup Language), and of course, the brilliantly
> failed XHTML/WICD concepts, which got bulldozed out of
> existence by Ian Hickson, who recognized that modular XHTML was a
> good case of XML jumping the shark. All of these were specs
> that either came into existence within or were quickly taken
> over by large committees representing corporations with strong
> vested interests, and with no single champions strong enough
> to fend off those interests in the name of communal
> standardization. They became catch-alls for a couple of key
> corporate players with strong agendas (SVG and Adobe is a
> prime example) that were looking at creating a "specification"
> that they would have a strong lead in upon completion. SVG is
> finally gaining traction universally, and curiously enough,
> most of that traction has come about in the non-animating
> portion, because that represented what was really needed in
> most cases.
>
>
> I think the danger that we face at this juncture is throwing
> out the good with the bad. XML has gained adoption not because
> of the XML standards community. It gained adoption because it
> filled a very definite niche - document-centric structures -
> quite well, and over time more and more "data" is being
> represented this way because data is becoming more robust and
> document-like. That's not going to go away because the browser
> community (which I see as becoming a decreasingly important
> part of the overall equation as mobile devices become the
> norm) has decided that everyone needs to be AJAX developers.
> It does, however, mean that the XML standards community needs
> to understand that it has to reach out and recognize that the
> infoset can have multiple serializations, that not all those
> serializations are going to involve angle brackets, and that
> collections of content ultimately will end up becoming more
> important than individual documents.
>
> Kurt Cagle
> XML Architect
> Lockheed / US National Archives ERA Project
>
>
>
>
>
> On Tue, Nov 30, 2010 at 8:21 PM, Liam R E Quin <liam@w3.org>
> wrote:
> On Tue, 2010-11-30 at 19:59 -0500, Elliotte Rusty
> Harold wrote:
> > On Tue, Nov 30, 2010 at 6:47 PM, Amelia A Lewis
> <amyzing@talsever.com> wrote:
> >
> > > I'm increasingly of the opinion that XML "jumped
> the shark" with the
> > > XML Namespaces specification.
> >
> > Namespaces was at least the nose of the shark. I
> think we really
> > should have insisted that prefixes not be able to be
> bound to more
> > than one URI in the same document.
>
>
> There are a lot of problems with the namespace spec. I
> don't think that
> particular restriction would be helpful, because it
> would complicate
> reuse of document fragments. But, lack of consensus at
> the time
> reflected lack of implementation experience --
> namespaces should have
> been put on hold for a couple of years at least.
>
> > Schemas are bad, but ignorable.
>
> I think XSD has some OK parts, and, like namespaces,
> meets some real
> needs... while causing lots of problems of its own.
>
> >
> > I remain convinced, though, that the single biggest
> mistake was DOM.
>
>
> +1 here, totally agree.
>
> Liam
>
>
> --
> Liam Quin - XML Activity Lead, W3C,
> http://www.w3.org/People/Quin/
> Pictures from old books: http://fromoldbooks.org/
>
>
>