> As David alludes, this shows the lie of JSON being more human readable (at least as a general rule).

It's true that JSON is more human readable than XML for tree-structured data. Not recognizing that will just help JSON gain more users. I think the true strength of XML lies in its DOM.

My goal was to prove that it's possible to serialize XML DOM documents differently, in a way that is as readable as JSON, and I think I've succeeded in proving the point. My SML format is strictly equivalent to standard XML: files can be converted between the two formats reversibly. And for pure tree-structured data, it's arguably even simpler to read than JSON, as it uses even fewer punctuation characters.

For mixed data, I've failed. Not in the sense that it's not equivalent to standard XML -- it is -- but in the sense that it's not simpler. It's actually more complex (though still shorter).
So if you work with mixed data, as in XHTML, then of course keep using standard XML. And if you work with XML-based languages or trees of data with no mixed data in them, then I think most people would find it much easier to convert files to my SML format to review or edit them.

From: "yamahito" <yamahito@gmail.com>
To: mailbox@johnmccaskey.com, xml-dev@lists.xml.org
Sent: Wednesday, 13 September 2017 22:39:37
Subject: Re: [xml-dev] Another way to present XML data

> p {u underlined ;i italic}

As David alludes, this shows the lie of JSON being more human readable (at least as a general rule).

> Sebastian was shocked that I would expect different results to be passed into the application depending on whether a DTD/Schema was used or not

This surprises me: surely the ability to redefine entity expansion alone sets a precedent for DTD dependency?

> It would, of course, be much better if we fixed the problem and went back to the rule that space in Mixed Content is significant, and all else is insignificant, when it is possible to identify the context as Mixed Content.

Of course, part of the problem is that it ISN'T always possible to identify content as mixed content reliably without more information.

On Wed, 13 Sep 2017 at 16:52 John P. McCaskey <mailbox@johnmccaskey.com> wrote:

For background on whitespace and mixed content in text encodings such as TEI, see https://wiki.tei-c.org/index.php/XML_Whitespace.
--
On 9/13/2017 11:42 AM, Peter Flynn wrote:
On 13 September 2017 11:32:10 yamahito <yamahito@gmail.com> wrote:
> The case I often find processors screwing up is:
>
> <p><u>underlined</u> <i>italic</i></p>
>
> Note the significant whitespace between the <u/> and <i/>
This case is extremely common, and one of the places we messed up. I argued long with Sebastian over it: he maintained that because the application must [apparently "must"; I never understood why] always receive the same information from the parser -- regardless of whether the parser has used a DTD/Schema or not -- the rule of removing white-space-only nodes had to be honoured in all cases.
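[A minimal sketch illustrating the dispute, using Python's standard-library `xml.dom.minidom` (my addition, not from the thread): the space between the two inline elements is a whitespace-only text node, and the "remove white-space-only nodes" rule destroys it.]

```python
from xml.dom.minidom import parseString

doc = parseString("<p><u>underlined</u> <i>italic</i></p>")
p = doc.documentElement

# The space between </u> and <i> is a whitespace-only text node.
ws = p.childNodes[1]
print(repr(ws.data))  # ' '

# Blindly removing whitespace-only text nodes -- the rule applied
# regardless of DTD context -- corrupts this Mixed Content:
for node in list(p.childNodes):
    if node.nodeType == node.TEXT_NODE and not node.data.strip():
        p.removeChild(node)

print(p.toxml())  # '<p><u>underlined</u><i>italic</i></p>'
```

The rendered text silently changes from "underlined italic" to "underlineditalic".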
I respectfully disagreed, holding that iff the DTD (in the case we were discussing) made it clear that the context was Mixed Content, then white-space-only nodes were *significant* and *must* be passed intact to the application (ie neither normalized nor annulled).
Sebastian was shocked that I would expect different results to be passed into the application depending on whether a DTD/Schema was used or not; I attempted to persuade him that a FIXED attribute or a REQUIRED attribute with a default value would be a case in point, but we never resolved the matter satisfactorily.
It's easily fixed in the classes of text document with which I usually deal, at the cost of a few cycles: in every XSLT template which matches an element type in Mixed Content, make the first action a call to a named template which checks if the immediately-preceding node is an element node of a type which would normally be spaced in the class of text documents you handle; if so, add a space token to the result tree.
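[The named-template fix-up Peter describes can be sketched outside XSLT as well. Here is a hedged Python approximation (the names `SPACED_TYPES` and `restore_spaces`, and the set of element types, are mine, not from the thread): before each inline element, check whether the immediately preceding node is an element of a type that would normally be spaced, and if so, add a space.]

```python
from xml.dom.minidom import parseString

# Element types normally separated by a space in running text.
# This set is illustrative only; it depends on your document class.
SPACED_TYPES = {"u", "i", "b", "em", "strong"}

def restore_spaces(parent):
    """Insert a space before an inline element whose immediately
    preceding sibling is an element type normally spaced in text."""
    doc = parent.ownerDocument
    for node in list(parent.childNodes):
        prev = node.previousSibling
        if (node.nodeType == node.ELEMENT_NODE
                and node.tagName in SPACED_TYPES
                and prev is not None
                and prev.nodeType == prev.ELEMENT_NODE
                and prev.tagName in SPACED_TYPES):
            parent.insertBefore(doc.createTextNode(" "), node)

# A document whose whitespace-only nodes were already stripped:
doc = parseString("<p><u>underlined</u><i>italic</i></p>")
restore_spaces(doc.documentElement)
print(doc.documentElement.toxml())  # '<p><u>underlined</u> <i>italic</i></p>'
```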
This needs more refinement if, for example, you deal with TEI documents containing character-level element markup *within* words (eg linguistic or editorial markup) where adding space would be an error. But in the conventional run of textual material (eg XHTML, DocBook, JATS...) I have found this rarely causes a problem.
It would, of course, be much better if we fixed the problem and went back to the rule that space in Mixed Content is significant, and all else is insignificant, when it is possible to identify the context as Mixed Content. But that would cause too much pain at this stage; it's hard enough as it is to persuade text owners to consider XML as things currently stand -- to change punts in mid-stream would not help.
///Peter