Just want to comment on the terseness angle.
In other domains, the need for a data-preparation step to tame obnoxiously formatted data before it is used is generally accepted and uncontroversial. When it comes to XML, however, no such step is even contemplated; instead we get specious, interminable whining about bloat and verbosity.
Here is a past example: 324K of JSON naively converted to 1.16MB of XML, which shrank to 350K of XML after some 'data prep'.
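A minimal sketch of the kind of prep I mean (the sample data and tag names are made up, not from that dataset): a naive conversion pays for an opening and a closing tag per scalar value, while prepped output moves scalars into attributes and drops most of that overhead.

    import json
    import xml.etree.ElementTree as ET

    data = json.loads('{"users": [{"id": 1, "name": "Ada", "active": true}]}')

    # Naive conversion: every scalar becomes its own child element,
    # so each value costs an opening tag plus a closing tag.
    def naive(obj, tag):
        el = ET.Element(tag)
        if isinstance(obj, dict):
            for k, v in obj.items():
                el.append(naive(v, k))
        elif isinstance(obj, list):
            for item in obj:
                el.append(naive(item, "item"))
        else:
            el.text = str(obj)
        return el

    # 'Data prep': scalars become attributes on the parent element,
    # eliminating a full closing tag per value.
    def prepped(obj, tag):
        el = ET.Element(tag)
        if isinstance(obj, dict):
            for k, v in obj.items():
                if isinstance(v, (dict, list)):
                    el.append(prepped(v, k))
                else:
                    el.set(k, str(v))
        elif isinstance(obj, list):
            for item in obj:
                el.append(prepped(item, "item"))
        else:
            el.text = str(obj)
        return el

    print(ET.tostring(naive(data, "root")))
    # <root><users><item><id>1</id><name>Ada</name><active>True</active></item></users></root>
    print(ET.tostring(prepped(data, "root")))
    # <root><users><item id="1" name="Ada" active="True" /></users></root>

The gap only grows with real data: the more scalar fields per record, the more closing tags the attribute form avoids.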
I believe that, with such a step, the sizes of 'equivalent' JSON and XML generally end up within about 10% of each other.