OASIS Mailing List Archives



[Date Prev] | [Thread Prev] | [Thread Next] | [Date Next] -- [Date Index] | [Thread Index]
Re: [xml-dev] Was OOXML's problem that it should have used JSON not XML?

Hans-Juergen: Yes, the difference is perhaps more in people's expectation of what the information language promises.

Michael Kay: But who would process OOXML using XSLT in that way? I have built several systems that generate OOXML, and one that reads it and substitutes some values, but I think XSLT (certainly 2.0) is often the wrong technology for complex processing of OOXML inputs, for example because of the indirection added by the flatness, the ZIP, MCE, versions, and the relationships files. I don't think support for expressing "semantic relationships" was ever a goal for OOXML (especially since the i4i case, when MS had to disable some XML support).
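To make that indirection concrete, here is a stdlib-only Python sketch of just the first hop a consumer must make before any document processing can start: unzip the package and parse the root relationships part (_rels/.rels) to find the main document part. The toy package built here is a stand-in, not a valid .docx; the relationship type URI is the standard one, everything else is illustrative.

```python
# Sketch of OPC relationships indirection: the package is a ZIP, and the
# root relationships part must be parsed to locate the main document part.
import io
import zipfile
import xml.etree.ElementTree as ET

REL_NS = "http://schemas.openxmlformats.org/package/2006/relationships"
MAIN_REL_TYPE = ("http://schemas.openxmlformats.org/officeDocument/"
                 "2006/relationships/officeDocument")

def build_toy_package() -> bytes:
    """Write a minimal package: a root .rels part pointing at word/document.xml."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("_rels/.rels",
            f'<Relationships xmlns="{REL_NS}">'
            f'<Relationship Id="rId1" Type="{MAIN_REL_TYPE}" '
            f'Target="word/document.xml"/></Relationships>')
        z.writestr("word/document.xml", "<document/>")
    return buf.getvalue()

def main_part_name(package: bytes) -> str:
    """Resolve the main document part via the root relationships part."""
    with zipfile.ZipFile(io.BytesIO(package)) as z:
        rels = ET.fromstring(z.read("_rels/.rels"))
    for rel in rels.iter(f"{{{REL_NS}}}Relationship"):
        if rel.get("Type") == MAIN_REL_TYPE:
            return rel.get("Target")
    raise ValueError("no main document relationship found")

print(main_part_name(build_toy_package()))  # word/document.xml
```

A real consumer would go through [Content_Types].xml and per-part .rels files as well, which is the layered indirection an XSLT stylesheet working on a single input tree never sees.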

Doesn't this just mean that, in a general-purpose language with JSON, you would have to adopt a particular programming pattern when iterating through the JSON tree: maintain your own visitation stack to allow parent::* access, and build indexes to allow keyed references?
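That pattern can be sketched in a few lines of Python. This is illustrative only: the walk carries an ancestor tuple (standing in for the parent::* axis) and a dict index over "id" members (standing in for XSLT's key()). The document shape and field names are made up.

```python
# Walk a JSON tree depth-first, carrying the ancestor chain with each node,
# and build an index over "id" members for keyed-reference lookup.
def walk(node, path=()):
    """Yield (node, ancestor_tuple) pairs, depth-first."""
    yield node, path
    if isinstance(node, dict):
        children = node.values()
    elif isinstance(node, list):
        children = node
    else:
        children = ()
    for child in children:
        yield from walk(child, path + (node,))

doc = {"id": "root",
       "items": [{"id": "a", "ref": "b"},
                 {"id": "b", "ref": "a"}]}

# Index pass: the rough equivalent of an XSLT key() over @id.
index = {n["id"]: n for n, _ in walk(doc)
         if isinstance(n, dict) and "id" in n}

# Parent access: find node "a" and inspect its ancestors.
node_a, ancestors = next((n, p) for n, p in walk(doc)
                         if isinstance(n, dict) and n.get("id") == "a")
parent = ancestors[-1]                      # the list that holds node "a"
assert index[node_a["ref"]]["id"] == "b"    # keyed reference resolved
```

The point is that nothing here is hard; it is just that the host language makes you build by hand what XPath gives you as axes and keys.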

I know what you mean by bottom-up versus top-down, and I am not sure that is exactly the case. The original XML format for Word 2003 was top-down and more like early ODF (like a neater RTF in XML), but when they got down to the nitty-gritty it became unworkable to proceed, so they had to start again. What they did the second time around was adopt stronger top-down design patterns (Open Packaging/ZIP, MCE (Markup Compatibility and Extensibility), versions, relationships, separation of concerns with stylesheets, graphics etc. in separate files, the properties pattern of attributes) and then tried to pour their binary format into that, top-down. It may look like bottom-up chaos if you are just expecting a single file, but it is systematic. (And then, the way of all flesh, when these extractions failed or were not developed in time, you ended up with lots of bottom-up carbuncles. SNAFU.)

We have corresponded before on this: I think XSD should not even be classed as a "web" technology, because it does not allow validation of webs of documents; it is a file technology. XSLT 1 at least had the document() function, and XSLT 3 has a much stronger story as a web technology with xsl:source-document, the collection() function, etc. Consequently, when you get to something like OOXML, the schemas provide no validation between documents in what is a highly linked collection.
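That gap can be made concrete with a stdlib-only sketch: a check that every relationship Target in a package resolves to an actual part, which is exactly the kind of inter-part constraint the per-file schemas cannot state. The toy package and relationship type are illustrative; target resolution here is simplified (internal, relative targets only).

```python
# Check the "web" of parts in a ZIP package: report relationship targets
# that point at no part. A per-file schema validates each part in isolation
# and can never express this constraint.
import io
import posixpath
import zipfile
import xml.etree.ElementTree as ET

REL_NS = "{http://schemas.openxmlformats.org/package/2006/relationships}"

def dangling_targets(package: bytes) -> list:
    """Return relationship targets that resolve to no part in the package."""
    missing = []
    with zipfile.ZipFile(io.BytesIO(package)) as z:
        parts = set(z.namelist())
        for name in parts:
            if not name.endswith(".rels"):
                continue
            # A part's .rels lives in <dir>/_rels/; targets are relative to <dir>.
            base = posixpath.dirname(posixpath.dirname(name))
            for rel in ET.fromstring(z.read(name)).iter(REL_NS + "Relationship"):
                target = posixpath.normpath(posixpath.join(base, rel.get("Target")))
                if target not in parts:
                    missing.append(target)
    return missing

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("_rels/.rels",
        f'<Relationships xmlns="{REL_NS[1:-1]}">'
        f'<Relationship Id="rId1" Type="t" Target="word/document.xml"/>'
        f'</Relationships>')
    # word/document.xml deliberately omitted: a broken link in the web of parts.
print(dangling_targets(buf.getvalue()))  # ['word/document.xml']
```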

Murata-san:  Oh, I am not suggesting OOXML be replaced by JSON now!  Yikes! 

It seems there is a schema language for JSON, JSchema, and there is a converter from XSD to JSchema.  And, yes, a large data structure or document needs some method of validation.  The work and information required would not be much different. 
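For flavour, here is a toy illustration (deliberately not JSchema or JSON Schema syntax) of the sort of constraint any such language has to express: required members and their types. The schema shape and member names are made up.

```python
# Toy one-level validator: check required members and their Python types.
def check(value, schema):
    """Return a list of error strings for one level of a toy schema."""
    errors = []
    for key, expected_type in schema.get("required", {}).items():
        if key not in value:
            errors.append(f"missing member: {key}")
        elif not isinstance(value[key], expected_type):
            errors.append(f"{key}: expected {expected_type.__name__}")
    return errors

schema = {"required": {"title": str, "pages": int}}
assert check({"title": "Doc", "pages": 3}, schema) == []
assert check({"title": "Doc"}, schema) == ["missing member: pages"]
```

Co-occurrence constraints, references, and extensibility rules are where the real work (and the real information) lies, whichever syntax carries them.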

But does anyone who implements OOXML consumer applications actually read it into a DOM with the XSD and use the PSVI? (Or does anyone use the schemas for dynamic data binding?) I suspect developers use the schemas to generate code (i.e., stub classes for import functions) and then maintain the code by hand. Do you have a feel for this?

(XML-DEVers may not be aware, but one of Murata-san's jobs for the last 10 years has been diligently working through the QA on ISO OOXML: trying to keep up with a moving target, correcting places where the initial documentation was wrong, speculative, or incomplete, and making sure it has the information that stakeholders, such as non-MS developers, require. Very important work, IMHO.)




Copyright 1993-2007 XML.org. This site is hosted by OASIS