To throw a practical problem in: imagine a global company not unlike where I work. One persistent problem they have with their XML systems, but not with their SGML systems, is that some of their legacy DTDs ported from SGML declare large numbers of default attributes (sometimes 20 default attributes per element), and their documents, which can be large anyway, can multiply in size with useless attributes.
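To make that concrete, here is a hypothetical ATTLIST of the kind I mean (the element name, attribute names and defaults are invented for illustration):

    <!ELEMENT para (#PCDATA)>
    <!ATTLIST para
        security  (public|internal|secret)  "public"
        audience  CDATA                     "all"
        revision  CDATA                     "1.0"
        lang      CDATA                     "en">

A parser that reports defaulted values the same way as specified ones hands four attributes to every <para> downstream, even when the author typed none of them; scale that up to 20 attributes per element and the bloat is obvious.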
When using OmniMark, we have no problem, because the infoset it operates on includes information about whether a value was defaulted or not, so null transformations on documents are possible. With XSLT, for example, there is no way to know, so you have to fake it by coding the ATTLIST default rules into the XSLT to strip out the values. Fragile and bothersome; see the sketch below.
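For the record, the workaround looks roughly like this: an identity transform plus one empty template per defaulted attribute, with the default values copied by hand from the (invented) ATTLIST above. It is a sketch, not production code, and it shows why the approach is fragile: an author who explicitly types the default value gets it stripped anyway.

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <!-- Identity transform: copy everything through unchanged. -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>

      <!-- Drop attributes whose value equals the DTD default, on the
           guess that they were defaulted rather than specified. These
           rules must be kept in sync with the ATTLIST by hand. -->
      <xsl:template match="para/@security[. = 'public']"/>
      <xsl:template match="para/@audience[. = 'all']"/>
      <xsl:template match="para/@revision[. = '1.0']"/>
      <xsl:template match="para/@lang[. = 'en']"/>

    </xsl:stylesheet>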
Now, would having a data model have fixed this? Probably not: the Infoset decided that defaulting was not interesting information, and any data model built on that would make the same decision. Transformation tools have been built on a data model that is one step removed from the actual XML, and it has meant that one major optimization does not flow through the pipeline.
I don't see that a data model would help move the technology in a direction better optimized for large numbers of default attributes. If you make things, let them solve real problems.
Cheers
Rick