I want to avoid specifics, because they are always so tied to one language and framework. But try creating some XML with the .NET libraries and then simply changing the output from UTF-8 to UTF-16, and you will realise how poorly these libraries work together and how many ridiculous hoops you have to jump through to avoid memory leaks. It is simply a matter of curtailed investment: they are still stuck with the original DOM-based processing model. It forces people to use the XML serialisation classes in ways never intended, just to be able to write to a StringBuilder so the character encoding can be specified as UTF-16. And if the XML is at all dynamic, involving more than merely a strongly typed document, that misuse of XmlSerializer leaks memory that builds up over time, because the serialiser constructors beyond the plain Type overloads generate assemblies that are never unloaded. Very bad. To avoid the serialiser altogether, the developer has to fall back on XmlTextReader and convoluted code to coax UTF-16 output out of UTF-8 input. Not nice. You then get to see how immature these libraries are, and how little investment has been made in getting them to work with each other and improving them over time. They are stuck in 2005.
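As an illustration of one of those hoops (a sketch, not necessarily the exact code the post has in mind): XmlWriter over a StringBuilder always declares encoding="utf-16", because .NET strings are UTF-16 internally, and the Encoding on XmlWriterSettings is ignored for string-backed targets. Getting any *other* encoding declared while still building into a string takes the well-known trick of a StringWriter subclass that overrides its Encoding property:

```csharp
using System;
using System.IO;
using System.Text;
using System.Xml;

// StringWriter reports Encoding.Unicode (UTF-16), and XmlWriter copies that
// into the XML declaration. Overriding Encoding is the standard hoop to make
// a string-backed writer declare something else, e.g. UTF-8.
sealed class Utf8StringWriter : StringWriter
{
    public Utf8StringWriter(StringBuilder sb) : base(sb) { }
    public override Encoding Encoding => Encoding.UTF8;
}

static class Demo
{
    static void Main()
    {
        var sb = new StringBuilder();
        using (var w = XmlWriter.Create(new Utf8StringWriter(sb)))
        {
            w.WriteStartElement("root");
            w.WriteElementString("item", "value");
            w.WriteEndElement();
        }
        // The declaration now reads encoding="utf-8", even though the
        // backing store is still a UTF-16 .NET string.
        Console.WriteLine(sb.ToString());
    }
}
```

Note that nothing here re-encodes any bytes; the writer merely declares a different encoding than the string actually uses, which is exactly the kind of mismatch the post is complaining about.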
A reason so many things are stuck in 2005 (and earlier) is that developers refuse to upgrade to tooling that uses newer specifications.