Perhaps another way to make my point is that there is no reason (or is there?) that a logical or semantic analysis of data would expose the properties that might be the most idiomatic or useful for serialising RDF into XML. To say it is just a matter of pointing to some part of the graph and letting it serialise from there is handwaving, isn't it? The data is rarely complete enough.
For example, say I have a big pricelist in RDF with lots of items and different price components. What reason would there be for our triples to say that they correspond to some standard XML idiom, such as, say, an HTML table, until the decision is made that we want to generate HTML tables?
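To make that concrete, here is a rough Python sketch (rdflib, with a made-up ex: namespace and item URI just for illustration). Nothing in the triples says "this is a table row" or which price component becomes which column; that choice has to be made in the serialising code:

    from rdflib import Graph, Namespace, Literal, URIRef

    EX = Namespace("http://example.org/pricelist#")   # hypothetical vocabulary
    g = Graph()
    item = URIRef("http://example.org/item/42")        # hypothetical item
    g.add((item, EX.description, Literal("Widget, large")))
    g.add((item, EX.listPrice, Literal("19.95")))
    g.add((item, EX.taxComponent, Literal("1.75")))

    # The decision that description and listPrice become table cells,
    # and taxComponent does not, lives here, not in the graph:
    print("<tr><td>%s</td><td>%s</td></tr>" % (
        g.value(item, EX.description), g.value(item, EX.listPrice)))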
Or if I wanted my pricelist to be arranged alphabetically. That ordering is needed to serialise, but it is outside any triples and not in any schema mapping.
Or if I wanted to serialise the Thai data out while performing its complex Unicode normalisation. Nothing to do with the triples.
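Both of those are write-time decisions. A rough sketch of what I mean, assuming a Thai locale (th_TH.UTF-8) happens to be installed; the sort order and the normalisation form come from the locale and from Unicode, not from any triple or schema mapping:

    import locale, unicodedata

    # Collation for the output order comes from the locale, not the data
    locale.setlocale(locale.LC_COLLATE, "th_TH.UTF-8")
    names = ["ไก่", "เป็ด", "กุ้ง"]
    ordered = sorted(names, key=locale.strxfrm)

    # Normalisation is likewise applied as the XML is written out
    ordered = [unicodedata.normalize("NFC", n) for n in ordered]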
Or if I wanted to make sure that products with a missing graphic use some default graphic.
Or that ISO 8601 dates that turn out to be in the current Reiwa era in Japan need different element names. Our pricing catalog has no Japanese era names. And looking up the era by traversing RDF triples until you get to some date-to-era map, or whatever, provides no advantage compared to the developer looking it up, or a web service that does date in/era out, etc., does it?
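For illustration only, the kind of lookup I mean could be as dumb as this (the era start dates are public knowledge; the element-name choice is hypothetical), and none of it needs to be, or is, in the catalog's triples:

    from datetime import date

    # Reiwa began 2019-05-01, Heisei 1989-01-08, Showa 1926-12-25
    ERAS = [(date(2019, 5, 1), "Reiwa"),
            (date(1989, 1, 8), "Heisei"),
            (date(1926, 12, 25), "Showa")]

    def era_of(iso_date):
        d = date.fromisoformat(iso_date)
        for start, name in ERAS:
            if d >= start:
                return name
        return "pre-Showa"

    # Pick the element name at serialisation time:
    element = "reiwa-date" if era_of("2021-03-15") == "Reiwa" else "date"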
The devil is in the details. That we talk of "separating presentation from data" glosses over the frequent reality that idiomatic XML functions as a presented view: it is a kind of publication for a readership, not some pure data serialisation. Developers and maintainers, humans, view documents and need them to make sense as unmediated characters, as text, which requires that even "presentation-neutral", "pre-collation" XML is constructed using information that is not part of the triples.
Cheers
Rick