I think analytical clarity is helpful.
If you know your schema will be long-living, you can try to organize it in ways that reflect the separation of concerns (i.e. respecting Conway's Law).
So it is reasonable for the makers of a schema to have target capacity limitations, and for those limitations to be part of a schema used for validation. But the schemas then need to be capable of being organized that way: they must provide hooks and methods where constraints and values can be overridden.
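Concretely (a minimal sketch only; the type and file names are hypothetical), a base schema can publish a named type with no facets at all, purely as a hook for downstream restriction:

  <!-- base.xsd: a deliberately open type, published as an override point -->
  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:simpleType name="DescriptionType">
      <!-- no length or pattern facets here; downstream schemas tighten this -->
      <xs:restriction base="xs:string"/>
    </xs:simpleType>
    <xs:element name="Description" type="DescriptionType"/>
  </xs:schema>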
For example, consider the UBL Code-lists. These are lists that a schema uses, but which are maintained at a different cadence to the schemas, by different folk. That poses a maintainability issue if your schema language cannot look outside itself for data.
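As a sketch of the modular pattern (UBL's actual machinery is more elaborate; the file names and code values here are invented), the code list can live in its own schema document, on its own release cycle, and the main schema just pulls it in:

  <!-- currency-codes.xsd: maintained by the code-list custodians -->
  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:simpleType name="CurrencyCodeType">
      <xs:restriction base="xs:token">
        <xs:enumeration value="AUD"/>
        <xs:enumeration value="EUR"/>
        <xs:enumeration value="USD"/>
      </xs:restriction>
    </xs:simpleType>
  </xs:schema>

  <!-- invoice.xsd: maintained by the schema folk, at their own cadence -->
  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:include schemaLocation="currency-codes.xsd"/>
    <xs:element name="Currency" type="CurrencyCodeType"/>
  </xs:schema>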
I don't see that changing capacity constraints over time, as the world evolves, is much of a different problem.
The obvious way for users of XSD to deal with these things is to have a base schema with minimal constraints and maximal openness, and then regular specific schemas that extend and restrict it, providing the evolving details such as system capacity constraints. This treats schema evolution as a publishing problem. (Personally, I don't think it is good enough and you still need some homemade layer on top of it, but it goes some way.)
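In XSD 1.0 terms that publishing pattern is xs:redefine (xs:override in XSD 1.1). A sketch, reusing the hypothetical base.xsd above:

  <!-- system-x.xsd: one system's current capacity constraints -->
  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:redefine schemaLocation="base.xsd">
      <xs:simpleType name="DescriptionType">
        <!-- inside xs:redefine, the self-reference means "the original definition" -->
        <xs:restriction base="DescriptionType">
          <xs:maxLength value="256"/>
        </xs:restriction>
      </xs:simpleType>
    </xs:redefine>
  </xs:schema>

When capacity changes, you publish a new restricting schema; the base schema and the instance documents need not change.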
Regards
Rick