The underlying reason is not technical but economic.
Many SGML producers had an efficiency metric: average markup characters per element. A "p" element costs 7 characters (<p> plus </p>); if you can omit and imply the end tag, it costs 3; if you can omit and imply the start tag too, it costs 0. If you can replace a long string with a short reference, à la Markdown, you can get down to 1 or 2 characters per tag.
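For example, a sketch of those three levels of terseness using SGML's minimisation features (the element name and the blank-line short reference mapping are illustrative):

```sgml
<!-- Fully tagged: 7 markup characters per paragraph -->
<p>First paragraph.</p>
<p>Second paragraph.</p>

<!-- With OMITTAG, the end tag is implied by the next start tag: 3 characters -->
<p>First paragraph.
<p>Second paragraph.

<!-- With a SHORTREF mapping a blank line to the start tag: 0 characters -->
First paragraph.

Second paragraph.
```

The last form is only possible because the DTD tells the parser enough about the grammar to infer where the implied tags belong.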
I remember someone proudly telling me their large document system had achieved average one markup character per tag pair: very terse!
Why? Disk space was expensive; data transmission was slow; computer data busses were slow; computers and terminals were expensive; and trained fingers were expensive. The CPU resources to imply markup were not so bad compared to the cost of vanilla parsing: grammars and stack machines are quite efficient. Something as verbose as XML was a non-starter, and all the complexity of analysis and SGML DTDs could be justified economically for large concerns.
By the 1990s that was all on the way out, not least because cheap fingers were now viable offshore.
(So all the demand from industrial users that terseness was an essential feature dried up. And people who typed in text editors had little voice, while those who claimed we would all use tools to shield us from the markup were well represented.)
With no industrial demand from the data input side, we middle-ware/processing people (who would often normalise the SGML for pipeline processing anyway) had freer rein to jettison terseness, which only got in our way.
Once we middle-ware types were sated from having our way with the SGML standard, with almost no concern for data entry requirements, the backend people besieged it and begat XML Namespaces and XML Schemas. Terseness no longer prevented simplicity; verbosity did.
And, of course, those people who did need efficient data entry reinvented the 1970s with Markdown.
I don't miss markup minimisation. It was a brilliant idea to piggyback validation and minimisation.
But with no minimisation, a major constraint on DTDs disappeared (they only needed to model the grammar well enough to support minimisation), and this set the requirements for schemas adrift: it became "a good schema language is one that a computer scientist or object-oriented programmer recognises as doing the kinds of things they expect a schema language to do": inheritance, extension by suffixing, and so on. And if DTDs were too limited a class of grammars, then the answer was more powerful grammars. *
Cheers,
Rick the Raver
* My own feeling now is that XML's major shortfall is that structural markup (and base simple value types) has to be declared by schemas. XML attributes could use == for assigning IDs/keys, for example, and =# for referencing an ID/key. It could recognise numbers as well as string literals as attribute values.
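A sketch of what that might look like (hypothetical syntax, not legal XML today; the == and =# operators come from the footnote, while the element and attribute names are purely illustrative):

```xml
<!-- == declares an ID/key in the markup itself, no schema needed;
     =# references one; an unquoted value is recognised as a number -->
<chapter id=="intro" number=1>
  ...
</chapter>
<xref target=#"intro"/>
```

The point is that identity, reference, and basic value typing would be visible to any parser from the document alone, rather than being facts a schema layer has to bestow afterwards.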