OASIS Mailing List Archives
Re: [xml-dev] The perils of using the @ symbol in JSON key name ...mapping JSON to XML, Schematron, XSLT, XPath, and/or XQuery

Ghislain, I am not sure in which situations it would be justified to speak of an impedance mismatch; let's explore this together.

If an application does not do anything with the JSON entities except
   (i) evaluating their contents,
   (ii) modifying the contents, and
   (iii) creating new entities,

I believe there is absolutely no mismatch. The deal:
   (a) a format (JSON),
   (b) an internal representation (a node tree),
   (c) functions connecting them (a parsing and a serialization function).

The overall processing can be summarized as
   (1) initial parsing
   (2) operations on the internal representation
   (3) final serialization.
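The three-step pipeline above can be sketched in a few lines (a minimal Python illustration; the sample document and field names are invented for the example):

```python
import json

# hypothetical JSON input text
text = '{"name": "report", "items": [1, 2, 3]}'

# (1) initial parsing: text -> internal representation (here, dicts/lists)
doc = json.loads(text)

# (2) operations on the internal representation
doc["items"].append(4)

# (3) final serialization: internal representation -> text
out = json.dumps(doc)
print(out)
```

Whatever the internal representation is (Python objects, an XDM node tree, ...), these three steps remain; only the parsing and serialization functions change.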

This cannot be reduced by exchanging one internal representation for another. The appropriateness of a particular choice of internal representation depends on the ease and performance of the operations to be applied to it, and possibly also on the performance of the associated parsing and serialization functions.

An impedance mismatch can only occur if the JSON entities are also required to serve as a DIFFERENT internal representation - perhaps as JavaScript objects. But one should check carefully whether this is really the case - or whether one merely assumed so. So data management of JSON resources need in many cases not incur any impedance mismatch at all.

If you have, for example, file directories filled with JSON documents and you want to do some aggregated processing - reporting, validation, modification - this can as a rule be achieved much more elegantly using an internal node tree representation, as the navigational power of XPath 2 is matchless. For example, a single line of XQuery code (admittedly, a longish one) can return all nodes found anywhere in any of those documents that match a set of conditions of any complexity, without requiring you to know their location within the documents. Then add one line to learn the names of all files containing such nodes, or to modify those nodes, or to let them flow into an emerging report, etc.
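The XQuery one-liner relies on the descendant axis (//) being built in; the same idea - a recursive descent over every node of every JSON document in a directory - can be sketched in Python, where it takes rather more than one line (directory name and match condition are hypothetical):

```python
import json
from pathlib import Path

def walk(node):
    """Yield every node of a JSON tree: the node itself, then all descendants."""
    yield node
    if isinstance(node, dict):
        for value in node.values():
            yield from walk(value)
    elif isinstance(node, list):
        for item in node:
            yield from walk(item)

def matching_nodes(directory, predicate):
    """Yield (file name, node) for every node in every *.json file
    under `directory` that satisfies `predicate` - regardless of where
    in the document the node is located."""
    for path in Path(directory).rglob("*.json"):
        doc = json.loads(path.read_text())
        for node in walk(doc):
            if predicate(node):
                yield path.name, node
```

What XQuery expresses with a single // step must here be spelled out as an explicit recursion - which is exactly the navigational convenience the paragraph above is pointing at.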


Ghislain Fourny <g@28.io> wrote on Tuesday, 18 August 2015 at 8:48:

Hi Hans-Jürgen,

> Concerning the "cost" of transformation one should also remember the fact
> that the program works with an internal representation anyhow, not the
> document text. So one must take care not to confuse the mental
> transformation ("This is JSON, so to use it as XML I have to transform it,
> haven't I?") with any actual one - there is none, there is only a parsing of
> text into internal representation and a serialization of internal
> representation into text. So if the input is JSON text, treating it as XML
> only involves an *alternative* parsing and an *alternative* serialization,
> not any additional transformation.

You are making a very valid point here. The conversion can of course be optimized into a direct "cross-parsing" to the desired memory representation. Usually, the memory representation is backed by a data model, such as the XDM for XML.

However, this push-down functionality relies on the querying engine: one needs to find an engine that supports it, and with the required JSON-to-XML mapping, or one needs to be able to tamper with the source code (though there are a few open-source implementations out there).

I think another argument for sticking to JSON data models and query languages is to keep the technology stack lean and to avoid gluing one's way through impedance mismatches as much as possible.

Kind regards,


Copyright 1993-2007 XML.org. This site is hosted by OASIS